Search Results

Search found 13331 results on 534 pages for 'fluent interface'.


  • Linux pptp client stops working after several hours

    - by Aron Rotteveel
    Here's the situation. Setup:

      1 Windows Server 2008 machine acting as a Domain Controller and RRAS server
      1 CentOS machine in a datacentre located elsewhere
      PPTP client running on the CentOS machine, connected to the DC via PPTP

    When I connect to the DC, everything is working fine. I have set up a static IP for the dial-up connection in my RRAS server so that the CentOS machine is automatically assigned the IP 192.168.1.240. Inside the VPN, it is now possible to access this machine on the local IP address. Perfect. However, after several hours it simply seems to stop working (i.e. I cannot ping to or from this machine on the local network). The strange thing is, however:

      The DC shows the VPN client as still being connected
      The CentOS machine shows the network interface as being up
      There are no entries in my /var/log/messages that indicate a problem

    Output from ifconfig:

      ppp0      Link encap:Point-to-Point Protocol
                inet addr:192.168.1.240  P-t-P:192.168.1.160  Mask:255.255.255.255
                UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1396  Metric:1
                RX packets:43 errors:0 dropped:0 overruns:0 frame:0
                TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:3
                RX bytes:4511 (4.4 KiB)  TX bytes:15071 (14.7 KiB)

    Output from route -n:

      192.168.1.160   0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
      192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 ppp0

    I have the following in my ip-up.local:

      route add -net 192.168.1.0 netmask 255.255.255.0 dev ppp0

    The situation can easily be fixed by issuing a killall pppd and re-connecting. However, I obviously do not want to do this every X hours or so. I have tried running pppd with both the debug and the kdebug flags but cannot find the cause of this problem. Currently, my ppp0 network interface seems to be running and the last log lines mentioning it are:

      Feb 19 14:10:40 graviton pppd[10934]: local IP address 192.168.1.240
      Feb 19 14:10:40 graviton pppd[10934]: remote IP address 192.168.1.160
      Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up started (pid 10952)
      Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up finished (pid 10952), status = 0x0
      Feb 19 14:11:27 graviton pptp[10935]: anon log[decaps_gre:pptp_gre.c:414]: buffering packet 190 (expecting 189, lost or reordered)
      Feb 19 14:11:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
      Feb 19 14:11:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
      Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
      Feb 19 14:12:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
      Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:13:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:14:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:15:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:16:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:679]: no more Echo Reply/Request packets will be reported.

    I have enabled the persist option. The network interface is still running, but it is still impossible to send data through the VPN. Any help is appreciated.
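    Editor's note: until the root cause is found, one stopgap is a cron-driven watchdog that tests connectivity through the tunnel and bounces pppd only when it is actually dead. A minimal sketch, with assumptions: the peer address 192.168.1.160 comes from the output above, and the "pppd call pptp-tunnel" redial command is hypothetical and must match however the tunnel is normally started on this box.

      #!/bin/sh
      # /usr/local/sbin/pptp-watchdog.sh - run from cron every few minutes.
      # Ping the RRAS end of the tunnel; if it stops answering while ppp0
      # still looks "up", kill pppd and redial.
      PEER=192.168.1.160
      if ! ping -c 3 -W 5 "$PEER" >/dev/null 2>&1; then
          logger -t pptp-watchdog "peer $PEER unreachable, restarting pppd"
          killall pppd
          sleep 5
          pppd call pptp-tunnel   # assumption: adjust to your /etc/ppp/peers file
      fi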

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist across a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results and previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon, to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. The workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user?

    Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script but this doesn't fly.

      Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. ***
      Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting
      Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50)
      Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed
      Dec 15 19:30:47 localhost configd[15]: network configuration changed.
      Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
      Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started
      Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit
      Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit
      Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started
      Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0
      Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2
      Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable
      Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable
      Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName
      Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent
      Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
      Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
      Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed.
      Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local"
      Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds
      Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000
      Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com
      Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known
      Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed.
      Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
      Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20
      Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available!
      Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000
      Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log

    You can see the following line in /var/log/kernel.log that shows the en0 interface coming up:

      Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
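    Editor's note: one approach that often works here is to have the launchd job run a small wrapper that blocks until the DHCP lease has actually installed a default route, and only then adds the static route. A minimal sketch, assuming the route from the log above; the script path is hypothetical.

      #!/bin/sh
      # /usr/local/bin/add-static-route.sh - invoked from a launchd job at boot.
      # Wait (up to ~60s) for DHCP to install a default route, then add ours.
      n=0
      while ! route -n get default >/dev/null 2>&1; do
          n=$((n+1)); [ "$n" -ge 60 ] && break
          sleep 1
      done
      route -n add -net 192.168.0.0/23 192.168.2.2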

    Read the article

  • Cisco ASA (Client VPN) to LAN - through second VPN to second LAN

    - by user50855
    We have 2 sites linked by an IPsec VPN between remote Cisco routers:

      Site 1: 1.5Mb T1 connection, Cisco(1) 2841
      Site 2: 1.5Mb T1 connection, Cisco 2841

    In addition, Site 1 has a 2nd WAN, a 3Mb bonded T1 connection on a Cisco ASA 5510, that connects to the same LAN as Cisco(1) 2841. Basically, Remote Access (VPN) users connecting through the Cisco ASA 5510 need access to a service at the end of Site 2. This is due to the way the service is sold - the Cisco 2841 routers are not under our management, and the service is set up to allow connections from the local LAN VLAN 1 IP address range 10.20.0.0/24. My idea is to have all traffic from Remote Users through the Cisco ASA destined for Site 2 go via the VPN between Site 1 and Site 2, the end result being that all traffic that hits Site 2 has come via Site 1. I'm struggling to find a great deal of information on how this is set up. So, firstly, can anyone confirm that what I'm trying to achieve is possible? Secondly, can anyone help me to correct the configuration below, or point me in the direction of an example of such a configuration? Many thanks.

      interface Ethernet0/0
       nameif outside
       security-level 0
       ip address 7.7.7.19 255.255.255.240
      interface Ethernet0/1
       nameif inside
       security-level 100
       ip address 10.20.0.249 255.255.255.0
      object-group network group-inside-vpnclient
       description All inside networks accessible to vpn clients
       network-object 10.20.0.0 255.255.255.0
       network-object 10.20.1.0 255.255.255.0
      object-group network group-adp-network
       description ADP IP Address or network accessible to vpn clients
       network-object 207.207.207.173 255.255.255.255
      access-list outside_access_in extended permit icmp any any echo-reply
      access-list outside_access_in extended permit icmp any any source-quench
      access-list outside_access_in extended permit icmp any any unreachable
      access-list outside_access_in extended permit icmp any any time-exceeded
      access-list outside_access_in extended permit tcp any host 7.7.7.20 eq smtp
      access-list outside_access_in extended permit tcp any host 7.7.7.20 eq https
      access-list outside_access_in extended permit tcp any host 7.7.7.20 eq pop3
      access-list outside_access_in extended permit tcp any host 7.7.7.20 eq www
      access-list outside_access_in extended permit tcp any host 7.7.7.21 eq www
      access-list outside_access_in extended permit tcp any host 7.7.7.21 eq https
      access-list outside_access_in extended permit tcp any host 7.7.7.21 eq 5721
      access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient any
      access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient object-group group-adp-network
      access-list acl-vpnclient extended permit ip object-group group-adp-network object-group group-inside-vpnclient
      access-list PinesFLVPNTunnel_splitTunnelAcl standard permit 10.20.0.0 255.255.255.0
      access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 10.20.1.0 255.255.255.0
      access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 host 207.207.207.173
      access-list inside_nat0_outbound_1 extended permit ip 10.20.1.0 255.255.255.0 host 207.207.207.173
      ip local pool VPNPool 10.20.1.100-10.20.1.200 mask 255.255.255.0
      route outside 0.0.0.0 0.0.0.0 7.7.7.17 1
      route inside 207.207.207.173 255.255.255.255 10.20.0.3 1
      crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
      crypto ipsec security-association lifetime seconds 28800
      crypto ipsec security-association lifetime kilobytes 4608000
      crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA
      crypto dynamic-map outside_dyn_map 20 set security-association lifetime seconds 288000
      crypto dynamic-map outside_dyn_map 20 set security-association lifetime kilobytes 4608000
      crypto dynamic-map outside_dyn_map 20 set reverse-route
      crypto map outside_map 20 ipsec-isakmp dynamic outside_dyn_map
      crypto map outside_map interface outside
      crypto map outside_dyn_map 20 match address acl-vpnclient
      crypto map outside_dyn_map 20 set security-association lifetime seconds 28800
      crypto map outside_dyn_map 20 set security-association lifetime kilobytes 4608000
      crypto isakmp identity address
      crypto isakmp enable outside
      crypto isakmp policy 20
       authentication pre-share
       encryption 3des
       hash sha
       group 2
       lifetime 86400
      group-policy YeahRightflVPNTunnel internal
      group-policy YeahRightflVPNTunnel attributes
       wins-server value 10.20.0.9
       dns-server value 10.20.0.9
       vpn-tunnel-protocol IPSec
       password-storage disable
       pfs disable
       split-tunnel-policy tunnelspecified
       split-tunnel-network-list value acl-vpnclient
       default-domain value YeahRight.com
      group-policy YeahRightFLVPNTunnel internal
      group-policy YeahRightFLVPNTunnel attributes
       wins-server value 10.20.0.9
       dns-server value 10.20.0.9 10.20.0.7
       vpn-tunnel-protocol IPSec
       split-tunnel-policy tunnelspecified
       split-tunnel-network-list value YeahRightFLVPNTunnel_splitTunnelAcl
       default-domain value yeahright.com
      tunnel-group YeahRightFLVPN type remote-access
      tunnel-group YeahRightFLVPN general-attributes
       address-pool VPNPool
      tunnel-group YeahRightFLVPNTunnel type remote-access
      tunnel-group YeahRightFLVPNTunnel general-attributes
       address-pool VPNPool
       authentication-server-group WinRadius
       default-group-policy YeahRightFLVPNTunnel
      tunnel-group YeahRightFLVPNTunnel ipsec-attributes
       pre-shared-key *

    Read the article

  • Tunnel is up but cannot ping directly connected network

    - by drmanalo
    We configured a site-to-site VPN and here is the topology. I control the network on the left but not the one on the right. All devices in our network have public IPs.

      Server---ASA5505---Cisco887======Internet=====ASA5510---devices

    I can see the tunnel is up and can do an extended ping using a loopback interface. From the 10.175 and 10.165 networks, they can also ping my loopback address. I can also dial in using a Cisco VPN client and connect to the devices on the right.

      #show crypto session
      Crypto session current status
      Interface: Vlan3
      Profile: xxx-profile
      Session status: UP-ACTIVE
      Peer: 213.121.x.x port 500
        IKEv1 SA: local 77.245.x.x/500 remote 213.121.x.x/500 Active
        IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.175.0.0/255.255.128.0
              Active SAs: 0, origin: crypto map
        IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.165.0.0/255.255.192.0
              Active SAs: 2, origin: crypto map

      #ping 10.165.29.39 source loopback 2
      Type escape sequence to abort.
      Sending 5, 100-byte ICMP Echos to 10.165.29.39, timeout is 2 seconds:
      Packet sent with a source address of 10.0.20.1
      !!!!!
      Success rate is 100 percent (5/5), round-trip min/avg/max = 16/17/20 ms

    My problem is the devices on the right cannot reach my server. They can only ping the loopback address and nothing else. I'm pasting some diagnostics related to routing, thinking perhaps routing is my issue. I can paste all the running-config on my side of the network if needed.

      #show ip int brief
      Interface          IP-Address   OK? Method Status                Protocol
      ATM0               unassigned   YES NVRAM  administratively down down
      Ethernet0          unassigned   YES NVRAM  administratively down down
      FastEthernet0      unassigned   YES unset  up                    up       connected to ASA
      FastEthernet1      unassigned   YES unset  administratively down down
      FastEthernet2      unassigned   YES unset  administratively down down
      FastEthernet3      unassigned   YES unset  up                    up
      Loopback1          10.0.20.65   YES NVRAM  up                    up
      Loopback2          10.0.20.1    YES NVRAM  up                    up
      Virtual-Template1  77.245.x.x   YES unset  up                    down
      Virtual-Template2  77.245.x.x   YES unset  up                    down
      Vlan1              unassigned   YES unset  down                  down
      Vlan3              77.245.x.x   YES NVRAM  up                    up       connected to the Internet

      #show run | section ip route
      ip route 0.0.0.0 0.0.0.0 77.245.x.x
      ip route 213.121.240.36 255.255.255.255 Vlan3

      #show access-list
      Extended IP access list 102
          10 permit ip 10.0.20.0 0.0.0.15 10.175.0.0 0.0.127.255 (3332 matches)
          20 permit ip 10.0.20.0 0.0.0.15 10.165.0.0 0.0.63.255 (3498 matches)

      #show vlan-switch
      VLAN Name                             Status    Ports
      ---- -------------------------------- --------- -------------------------------
      1    default                          active
      3    VLAN0003                         active    Fa0, Fa1, Fa2, Fa3
      1002 fddi-default                     act/unsup
      1003 token-ring-default               act/unsup
      1004 fddinet-default                  act/unsup
      1005 trnet-default                    act/unsup

      #show ip route
      Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
             D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
             N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
             E1 - OSPF external type 1, E2 - OSPF external type 2
             i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
             ia - IS-IS inter area, * - candidate default, U - per-user static route
             o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
             + - replicated route, % - next hop override

      Gateway of last resort is 77.245.x.x to network 0.0.0.0

      S*    0.0.0.0/0 [1/0] via 77.245.x.x
            10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
      C        10.0.20.0/28 is directly connected, Loopback2
      L        10.0.20.1/32 is directly connected, Loopback2
      C        10.0.20.64/28 is directly connected, Loopback1
      L        10.0.20.65/32 is directly connected, Loopback1
      S        10.165.0.0/18 [1/0] via 213.121.x.x
            77.0.0.0/8 is variably subnetted, 3 subnets, 3 masks
      S        77.0.0.0/8 [1/0] via 77.245.x.x
      C        77.245.x.x/29 is directly connected, Vlan3
      L        77.245.x.x/32 is directly connected, Vlan3
            213.121.x.0/32 is subnetted, 1 subnets
      S        213.121.x.x is directly connected, Vlan3

    I read some of the posts here which point to a NAT issue, but I'm not sure of my next step. Should I translate my public address to private and route it to the loopback address? (only guessing)

      CISCO VPN site to site
      Site-to-Site VPN between two ASA 5505s only working in one direction

    Hope someone can help. Thanks in advance!
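    Editor's note: one thing visible in the output above is that the crypto ACL only matches sources in 10.0.20.0/28 (the loopbacks), so traffic to or from the server's own public address would never be selected for the tunnel; that fits the symptom that only the loopback is reachable. A quick way to confirm which direction is broken is to capture on the server while someone on the 10.165/10.175 side pings it. A sketch, assuming the server is Linux and its LAN interface is eth0 (both assumptions):

      # Run on the server while a device on the right pings its address.
      # No packets arriving = routing/crypto ACL problem on the path;
      # packets arriving with no replies = the server's return route.
      tcpdump -n -i eth0 icmp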

    Read the article

  • VirtualBox - Public Static IP for a Debian Guest on a Dedicated Server

    - by user86296
    Goal: I want to run a Debian Squeeze guest in VirtualBox with its own public static IP. I found tons of threads about this topic, but all in all I've now spent 10 hours (reading the manual, the forums, trying to learn about networking concepts & commands) trying to give a guest its own public static IP (so that the guest is similar to a vServer you can order from a hosting company), but wasn't able to. Since I'm a big noob as far as networking stuff is concerned, I'm probably doing something wrong. (please bear with me :-) )

    Situation: VirtualBox 4.0.10 (headless, no GUI) is running on a dedicated Debian server; the guest OS is Debian as well. The server has a static IP and I ordered an additional IP for a VM.

    Problem description: Up to now I was able to use NAT to access the VM from the outside and to set up an internal network between several guests, and all of this worked very well. When setting NIC 1 to bridged and configuring a public static IP on the guest, the guest was unpingable (neither from outside, nor from the host). I could connect to the guest via the internal network from another VM, though.

      VBoxManage controlvm VMGuest nic1 bridged eth0

    (The configuration attempt for the static IP in the guest's /etc/network/interfaces is below.) Please let me know what I'm doing wrong, or what I can try to get it to work, or if you need more info. I think I've read that with a current VirtualBox version no special host configuration is necessary for bridged networking - is that accurate, or might that be the problem?

    Additional info. What I got from the hosting company about the additional IP:

      Please note that you can use the IP address only for this server.
      IP: 46.4.xx.xx
      Gateway: 46.4.xx.xx
      Mask: 255.255.255.248

      VBoxManage showvminfo VMGuest | less
      ...
      NIC 1: MAC: 080027D72F7B, Attachment: Bridged Interface 'eth0', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0
      NIC 2: MAC: 080027B03B75, Attachment: Internal Network 'InternalNet1', Cable connected: on, Trace: off (file: none), Type: Am79C973, Reported speed: 0 Mbps, Boot priority: 0
      NIC 3: disabled
      (...rest is disabled)

    cat /etc/network/interfaces on the host machine:

      # Loopback device:
      auto lo
      iface lo inet loopback

      # device: eth0
      auto eth0
      iface eth0 inet static
        address 46.4.xx.xx
        broadcast 46.4.xx.xx
        netmask 255.255.255.224
        gateway 46.4.xx.xx
        post-up mii-tool -F 100baseTx-FD eth0
        # default route to access subnet
        up route add -net 46.4.xx.xx netmask 255.255.255.224 gw 46.4.xx.xx eth0

    cat /etc/network/interfaces on the guest VM:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      allow-hotplug eth0
      auto eth0
      iface eth0 inet static
        address 46.4.xx.xx
        netmask 255.255.255.248
        gateway 46.4.xx.xx

      auto eth1
      iface eth1 inet dhcp

    ifconfig -a on the guest shows the correct static IP for eth0, but the guest is unreachable "over eth0":

      eth0      Link encap:Ethernet  HWaddr 08:00:27:d7:2f:7b
                inet addr:46.4.xx.xx  Bcast:46.4.xx.xx  Mask:255.255.255.248
                inet6 addr: fe80::a00:27ff:fed7:2f7b/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:21 errors:0 dropped:0 overruns:0 frame:0
                TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:1260 (1.2 KiB)  TX bytes:3114 (3.0 KiB)

      eth1      Link encap:Ethernet  HWaddr 08:00:27:b0:3b:75
                inet addr:192.168.10.3  Bcast:192.168.10.255  Mask:255.255.255.0
                inet6 addr: fe80::a00:27ff:feb0:3b75/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:142 errors:0 dropped:0 overruns:0 frame:0
                TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:15962 (15.5 KiB)  TX bytes:14540 (14.1 KiB)
                Interrupt:16 Base address:0xd240

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:123 errors:0 dropped:0 overruns:0 frame:0
                TX packets:123 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:25156 (24.5 KiB)  TX bytes:25156 (24.5 KiB)
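    Editor's note: two things worth checking in this kind of setup. First, hosting providers that sell single additional IPs often route them only to a registered MAC address or expect a routed (non-bridged) setup, so it is worth asking the provider whether the extra IP is tied to a MAC. Second, the bridged adapter's promiscuous-mode policy must allow traffic for the guest's MAC. A sketch of the latter check (VM name "VMGuest" from the output above; the --nicpromisc option may require VirtualBox 4.1 or later):

      # With the VM powered off, let the bridged NIC receive frames
      # addressed to MACs other than the host's own:
      VBoxManage modifyvm VMGuest --nicpromisc1 allow-all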

    Read the article

  • Trouble with site-to-site OpenVPN & pfSense not passing traffic

    - by JohnCC
    I'm trying to get an OpenVPN tunnel going on pfSense 1.2.3-RELEASE running on embedded routers. I have a local LAN 10.34.43.0/24. The remote LAN is 10.200.1.0/24. The local pfSense is configured as the client, and the remote is configured as the server. My OpenVPN tunnel is using the IP range 10.99.89.0/24 internally. There are also some additional LANs on the remote side routed through the tunnel, but the issue is not with those, since my connectivity fails before that point in the chain. The tunnel comes up fine and the logs look healthy. What I find is this:

      I can ping and telnet to the remote LAN and the additional remote LANs from the local pfSense box's shell.
      I cannot ping or telnet to any remote LANs from the local network.
      I cannot ping or telnet to the local network from the remote LAN or the remote pfSense box's shell.
      If I tcpdump the tun interfaces on both sides and ping from the local LAN, I see the packets hit the tunnel locally, but they do not appear on the remote side (nor do they appear on the remote LAN interface if I tcpdump that).
      If I tcpdump the tun interfaces on both sides and ping from the local pfSense shell, I see the packets hit the tunnel locally and exit the remote side. I can also tcpdump the remote LAN interface and see them pass there too.
      If I tcpdump the tun interfaces on both sides and ping from the remote pfSense shell, I see the packets hit the remote tun, but they do not emerge from the local one.

    Here is the config file the remote side is using:

      #user nobody
      #group nobody
      daemon
      keepalive 10 60
      ping-timer-rem
      persist-tun
      persist-key
      dev tun
      proto udp
      cipher BF-CBC
      up /etc/rc.filter_configure
      down /etc/rc.filter_configure
      server 10.99.89.0 255.255.255.0
      client-config-dir /var/etc/openvpn_csc
      push "route 10.200.1.0 255.255.255.0"
      lport <port>
      route 10.34.43.0 255.255.255.0
      ca /var/etc/openvpn_server0.ca
      cert /var/etc/openvpn_server0.cert
      key /var/etc/openvpn_server0.key
      dh /var/etc/openvpn_server0.dh
      comp-lzo
      push "route 205.217.5.128 255.255.255.224"
      push "route 205.217.5.64 255.255.255.224"
      push "route 165.193.147.128 255.255.255.224"
      push "route 165.193.147.32 255.255.255.240"
      push "route 192.168.1.16 255.255.255.240"
      push "route 192.168.2.16 255.255.255.240"

    Here is the local config:

      writepid /var/run/openvpn_client0.pid
      #user nobody
      #group nobody
      daemon
      keepalive 10 60
      ping-timer-rem
      persist-tun
      persist-key
      dev tun
      proto udp
      cipher BF-CBC
      up /etc/rc.filter_configure
      down /etc/rc.filter_configure
      remote <host> <port>
      client
      lport 1194
      ifconfig 10.99.89.2 10.99.89.1
      ca /var/etc/openvpn_client0.ca
      cert /var/etc/openvpn_client0.cert
      key /var/etc/openvpn_client0.key
      comp-lzo

    You can see the relevant parts of the routing tables extracted from pfSense here: http://pastie.org/5365800

    The local firewall permits all ICMP from the LAN, and my PC is allowed everything to anywhere. The remote firewall treats its LAN as trusted and permits all traffic on that interface. Can anyone suggest why this is not working, and what I could try next?
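    Editor's note: this symptom pattern (pings from the pfSense boxes themselves traverse the tunnel, pings from LAN hosts vanish) is the classic sign of a missing iroute on the OpenVPN server. The server config above has a kernel-level "route 10.34.43.0 255.255.255.0" and a client-config-dir, but unless a CCD file tells OpenVPN which client owns 10.34.43.0/24, it drops those packets internally (typically logging "MULTI: bad source address from client"). A sketch of the usual fix on the server side, assuming the client certificate's common name is "client1" (hypothetical; the file name must match the real CN):

      # On the server: tell OpenVPN that 10.34.43.0/24 lives behind this client
      echo 'iroute 10.34.43.0 255.255.255.0' >> /var/etc/openvpn_csc/client1
      # then restart the OpenVPN server instance so the CCD file is re-read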

    Read the article

  • Cannot get official CentOS 5.4 BIND package to start

    - by Brian Cline
    Yesterday I installed CentOS 5.4 on one of my servers, and it appears that the official BIND/named package has trouble starting for reasons I cannot deduce. Here is what happens:

      [root@hal init.d]# service named start
      Starting named:
      Error in named configuration:
      /etc/named.conf:57: open: named.root.hints: permission denied
                                                                 [FAILED]

    The line in question, with the directory option for context:

      // further up in the file:
      directory "/var/named";

      // line 57:
      include "named.root.hints";

    Like you, my first reaction was to check permissions on /var/named/named.root.hints, /var/named, and /var to make sure the named user would be able to read it. Here are the permissions at each level:

      drwxr-xr-x 19 root  root  4096 Nov  3 02:05 var
      drwxr-x---  5 root  named 4096 Nov  3 02:36 named
      -rw-r--r--  1 named named  524 Mar 29  2006 named.root.hints

    Everything appears to be fine permission-wise. The same error occurs if the /var/named directory is writable by the named user. I've even temporarily allowed the named user to log in via bash, su'ed from root to named, and checked that I was, in fact, able to cat /var/named/named.root.hints successfully. (Yes, don't worry: I changed the shell back to nologin.) My last endeavor showed that BIND is able to run under the named user account and start up just fine, if done so manually:

      [root@hal ~]# named -u named -g
      03-Nov-2009 16:31:02.021 starting BIND 9.3.6-P1-RedHat-9.3.6-4.P1.el5 -u named -g
      03-Nov-2009 16:31:02.021 adjusted limit on open files from 1024 to 1048576
      03-Nov-2009 16:31:02.021 found 2 CPUs, using 2 worker threads
      03-Nov-2009 16:31:02.021 using up to 4096 sockets
      03-Nov-2009 16:31:02.028 loading configuration from '/etc/named.conf'
      03-Nov-2009 16:31:02.030 using default UDP/IPv4 port range: [1024, 65535]
      03-Nov-2009 16:31:02.031 using default UDP/IPv6 port range: [1024, 65535]
      03-Nov-2009 16:31:02.034 listening on IPv4 interface lo, 127.0.0.1#53
      03-Nov-2009 16:31:02.034 listening on IPv4 interface eth0, 10.0.0.5#53
      03-Nov-2009 16:31:02.034 listening on IPv4 interface eth1, ww.xx.yy.zz#53
      03-Nov-2009 16:31:02.040 command channel listening on 127.0.0.1#953
      03-Nov-2009 16:31:02.040 command channel listening on ::1#953
      03-Nov-2009 16:31:02.040 ignoring config file logging statement due to -g option
      03-Nov-2009 16:31:02.041 zone 0.in-addr.arpa/IN/localhost_resolver: loaded serial 42
      03-Nov-2009 16:31:02.042 zone 0.0.127.in-addr.arpa/IN/localhost_resolver: loaded serial 1997022700
      03-Nov-2009 16:31:02.042 zone 255.in-addr.arpa/IN/localhost_resolver: loaded serial 42
      03-Nov-2009 16:31:02.042 zone 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN/localhost_resolver: loaded serial 1997022700
      03-Nov-2009 16:31:02.043 zone localdomain/IN/localhost_resolver: loaded serial 42
      03-Nov-2009 16:31:02.043 zone localhost/IN/localhost_resolver: loaded serial 42
      03-Nov-2009 16:31:02.043 zone x.y.z.in-addr.arpa/IN/internal: loaded serial 1
      03-Nov-2009 16:31:02.044 zone x.y.z/IN/internal: loaded serial 2
      03-Nov-2009 16:31:02.045 running

    What type and size of firearm should I use to resolve this? I'd prefer something with automatic ammunition, and, at worst, it should be able to fit on my shoulder. Of course I am open to suggestions.
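    Editor's note: given that permissions check out and named starts fine by hand, one thing that behaves exactly like this on CentOS is SELinux: the init script starts named in its confined domain, while a shell-started named runs unconfined. It may be worth ruling that out before reaching for the firearm (a sketch; restorecon assumes the stock targeted policy):

      # Is SELinux enforcing, and does the file carry the expected context?
      getenforce
      ls -Z /var/named/named.root.hints
      # Reset the BIND data directory to its default contexts
      restorecon -R -v /var/named
      # If that fixes it, the original denial should be in the audit log
      grep named /var/log/audit/audit.log | tail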

    Read the article

  • Linux Kernel not passing through multicast UDP packets

    - by buecking
    Recently I've set up a new Ubuntu Server 10.04 and noticed my UDP server is no longer able to see any multicast data sent to the interface, even after joining the multicast group. I've got the exact same setup on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group. The ethernet card is a Broadcom NetXtreme II BCM5709 and the driver used is:

      b $ ethtool -i eth1
      driver: bnx2
      version: 2.0.2
      firmware-version: 5.0.11 NCSI 2.0.5
      bus-info: 0000:01:00.1

    I'm using smcroute to manage my multicast registrations.

      b$ smcroute -d
      b$ smcroute -j eth1 233.37.54.71

    After joining the group, ip maddr shows the newly added registration.

      b$ ip maddr
      1:  lo
          inet  224.0.0.1
          inet6 ff02::1
      2:  eth0
          link  33:33:ff:40:c6:ad
          link  01:00:5e:00:00:01
          link  33:33:00:00:00:01
          inet  224.0.0.1
          inet6 ff02::1:ff40:c6ad
          inet6 ff02::1
      3:  eth1
          link  01:00:5e:25:36:47
          link  01:00:5e:25:36:3e
          link  01:00:5e:25:36:3d
          link  33:33:ff:40:c6:af
          link  01:00:5e:00:00:01
          link  33:33:00:00:00:01
          inet  233.37.54.71   <------- McastGroup.
          inet  224.0.0.1
          inet6 ff02::1:ff40:c6af
          inet6 ff02::1

    So far so good, I can see that I'm receiving data for this multicast group.

      b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes
      09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
      09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
      09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268
      09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
      ...

    I can also confirm that the interface is receiving mcast packets.

      b $ ethtool -S eth1 | grep mcast_pack
      rx_mcast_packets: 103998
      tx_mcast_packets: 33

    Now here's the problem. When I try to capture the traffic using a simple Ruby UDP server, I receive zero data! Here's a simple server that reads data sent on port 15572 and prints the first two characters. This works on the two 8.04.4 Ubuntu servers, but not the 10.04 server.

      require 'socket'
      s = UDPSocket.new
      s.bind("", 15572)
      5.times do
        text, sender = s.recvfrom(2)
        puts text
      end

    If I send a UDP packet crafted in Ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly.

      irb(main):001:0> require 'socket'
      => true
      irb(main):002:0> s = UDPSocket.new
      => #<UDPSocket:0x7f3ccd6615f0>
      irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572)

    When I check the protocol statistics I see that InMcastPkts is not increasing, while the other 8.04 servers, on the same network, received a few thousand packets in 10 seconds.

      b $ netstat -sgu ; sleep 10 ; netstat -sgu
      IcmpMsg:
          InType3: 11
          OutType3: 11
      Udp:
          446 packets received
          4 packets to unknown port received.
          0 packet receive errors
          461 packets sent
      UdpLite:
      IpExt:
          InMcastPkts: 4654      <--------- Same as below
          OutMcastPkts: 3426
          InBcastPkts: 9854
          InOctets: -1691733021
          OutOctets: 51187936
          InMcastOctets: 145207
          OutMcastOctets: 109680
          InBcastOctets: 1246341
      IcmpMsg:
          InType3: 11
          OutType3: 11
      Udp:
          446 packets received
          4 packets to unknown port received.
          0 packet receive errors
          461 packets sent
      UdpLite:
      IpExt:
          InMcastPkts: 4656      <-------------- Same as above
          OutMcastPkts: 3427
          InBcastPkts: 9854
          InOctets: -1690886265
          OutOctets: 51188788
          InMcastOctets: 145267
          OutMcastOctets: 109712
          InBcastOctets: 1246341

    If I try forcing the interface into promisc mode, nothing changes. At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking?

      b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server
      CONFIG_IP_MULTICAST=y

    Any thoughts on where to go from here?
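    Editor's note: one kernel setting whose effective default tightened between 8.04-era and 10.04-era Ubuntu installs, and which produces exactly this picture (packets visible to tcpdump and the NIC counters but never delivered to sockets, with InMcastPkts frozen), is reverse-path filtering. Worth checking (interface name from the question):

      # Show the current reverse-path filter settings
      sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter
      # Temporarily disable it to test (both "all" and the interface matter)
      sysctl -w net.ipv4.conf.all.rp_filter=0
      sysctl -w net.ipv4.conf.eth1.rp_filter=0
      # If this is the cause, make it permanent in /etc/sysctl.conf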

    Read the article

  • Can't communicate between lan ports on openwrt router

    - by ScaryAardvark
    I've got a WBMR-HP-G300H Buffalo Airstation router on which I've installed the latest OpenWrt software. All is working well (ADSL, WiFi etc.) except for one niggle: I can't communicate between LAN ports. i.e. if I have one computer connected on LAN port 1 and I try to ping another computer on LAN port 2, I get "destination unreachable". I can ping both computers from the router itself, and can also ping each computer from a separate laptop connected wirelessly. All computers are in the same subnet range (10.0.0.?/24). I suspect that I may need to configure a VLAN on the switch, but every time I try to do this with various googled configurations I keep freezing out all LAN ports, and I have to revert back using a wirelessly connected laptop. Here's my /etc/config/network:

      config interface 'loopback'
          option ifname 'lo'
          option proto 'static'
          option ipaddr '127.0.0.1'
          option netmask '255.0.0.0'

      config interface 'lan'
          option type 'bridge'
          option proto 'static'
          option netmask '255.255.255.0'
          option ipaddr '10.0.0.1'
          option _orig_ifname 'eth0 wlan0'
          option _orig_bridge 'true'
          option ifname 'eth0'

      config adsl-device 'adsl'
          option fwannex 'a'
          option annex 'a2p'

      config interface 'wan'
          option _orig_ifname 'nas0'
          option _orig_bridge 'false'
          option proto 'pppoa'
          option encaps 'vc'
          option atmdev '0'
          option vci '38'
          option vpi '0'
          option username '?????????????'
          option password '??????????????'

    Any help would be warmly received. Here's some more config stuff.

      root@OpenWrt:~# ifconfig -a
      br-lan    Link encap:Ethernet  HWaddr 00:24:A5:BD:66:08
                inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:226576 errors:0 dropped:346 overruns:0 frame:0
                TX packets:269292 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:26771676 (25.5 MiB)  TX bytes:183986450 (175.4 MiB)

      eth0      Link encap:Ethernet  HWaddr 00:24:A5:BD:66:08
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      ifb0      Link encap:Ethernet  HWaddr 36:60:EC:DF:13:A1
                BROADCAST NOARP  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:32
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      ifb1      Link encap:Ethernet  HWaddr 4A:7B:75:67:54:E0
                BROADCAST NOARP  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:32
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:780 errors:0 dropped:0 overruns:0 frame:0
                TX packets:780 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:58369 (57.0 KiB)  TX bytes:58369 (57.0 KiB)

      mon.wlan0 Link encap:UNSPEC  HWaddr 00-24-A5-BD-66-08-00-48-00-00-00-00-00-00-00-00
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:2424 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:32
                RX bytes:320188 (312.6 KiB)  TX bytes:0 (0.0 B)

      pppoa-wan Link encap:Point-to-Point Protocol
                inet addr:81.136.179.204  P-t-P:81.134.80.1  Mask:255.255.255.255
                UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                RX packets:258894 errors:0 dropped:0 overruns:0 frame:0
                TX packets:212976 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:3
                RX bytes:177341656 (169.1 MiB)  TX bytes:25192459 (24.0 MiB)

      wlan0     Link encap:Ethernet  HWaddr 00:24:A5:BD:66:08
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:204063 errors:0 dropped:0 overruns:0 frame:0
                TX packets:245516 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:32
                RX bytes:26613140 (25.3 MiB)  TX bytes:162799765 (155.2 MiB)

      root@OpenWrt:~# brctl show
      bridge name     bridge id               STP enabled     interfaces
      br-lan          8000.0024a5bd6608       no              wlan0
                                                              eth0

      root@OpenWrt:~# swconfig dev eth0 show
      Global attributes:
              enable_vlan: 0
      Port 0:
              pvid: 0
              link: port:0 link:up speed:1000baseT full-duplex txflow rxflow
      Port 1:
              pvid: 0
              link: port:1 link:down
      Port 2:
              pvid: 0
              link: port:2 link:down
      Port 3:
              pvid: 0
              link: port:3 link:down
      Port 4:
              pvid: 0
              link: port:4 link:up speed:100baseT full-duplex txflow rxflow auto
      Port 5:
              pvid: 0
              link: port:5 link:up speed:100baseT full-duplex txflow rxflow auto

    Regards Mark.
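    Editor's note: the swconfig output (enable_vlan: 0 and every port with pvid: 0) suggests the switch has no VLAN configured at all, and on some switch drivers that leaves the ports isolated from each other rather than dumbly bridged. A heavily hedged sketch of what usually fixes this - run it from the wireless side in case it cuts you off, and note that the CPU port number (5 here) is an assumption that varies per board:

      # Enable VLANs and put the LAN ports plus the CPU port in VLAN 1
      swconfig dev eth0 set enable_vlan 1
      swconfig dev eth0 vlan 1 set ports "0 1 2 3 5t"   # 5t = tagged CPU port (assumed)
      swconfig dev eth0 set apply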

    Read the article

  • How to use Public IP in case of two ISP when two differs from each other

    - by user1471995
    Please bear with my long explanation, but it is important for explaining the actual problem. Please also pardon my knowledge of pfSense, as I am new to this. I have a single pfSense box with 3 Ethernet adapters. Before moving to the configuration for these, I want to let you know I have two Ethernet-based internet leased-line connections; let's call them ISP A and ISP B. The last interface is the LAN, which is connected to a network switch. Typical network diagram:

      ISP A ----- PFSense ----> Switch ----> Servers
      ISP B -----

    ISP A (initially purchased):

      WAN IP: 113.193.X.X /29
      Gateway IP: 113.193.X.A
      plus 4 other usable public IPs in the same subnet (so the gateway for those IPs is also the same).

    ISP B (recently purchased):

      WAN IP: 115.115.X.X /30
      Gateway IP: 115.115.X.B
      plus 5 other usable public IPs in a different subnet (so the gateway for those IPs is different); for example, if 115.119.X.X2 is one of the IPs from that list, then the gateway for this IP is 115.119.X.X1.

    Configuration for the 3 interfaces:

      Interface: WAN
      Network Port: nfe0
      Type: Static
      IP Address: 113.193.X.X /29
      Gateway: 113.193.X.A

      Interface: LAN
      Network Port: vr0
      Type: Static
      IP Address: 192.168.1.1 /24
      Gateway: None

      Interface: RELWAN
      Network Port: rl0
      Type: Static
      IP Address: 115.115.X.X /30 (I am not sure of the subnet)
      Gateway: 115.115.X.B

    To use the public IPs from ISP A I have done the following steps:

      a) Created Virtual IPs using either ARP or IP Alias.
      b) Using Firewall: NAT: Port Forward, created specific NAT mappings from one public IP to an internal LAN private IP, for example:

         WAN TCP/UDP * * 113.193.X.X1 53 (DNS)  192.168.1.5 53 (DNS)
         WAN TCP/UDP * * 113.193.X.X1 80 (HTTP) 192.168.1.5 80 (HTTP)
         WAN TCP     * * 113.193.X.X2 80 (HTTP) 192.168.1.7 80 (HTTP)
         etc.

      c) The current state for Firewall: NAT: Outbound is Manual, and only the default rules defined for the WAN are present.
      d) If this section is relevant: on the Firewall: Rules WAN tab, the following default rules have been generated:

         * RFC 1918 networks * * * * * Block private networks
         * Reserved/not assigned by IANA * * * * * *

    To use the public IPs from ISP B I have done the following steps:

      a) Created Virtual IPs using either ARP or IP Alias.
      b) Using Firewall: NAT: Port Forward, created specific NAT mappings from one public IP to an internal LAN private IP, for example:

         RELWAN TCP/UDP * * 115.119.116.X.X1 80 (HTTP) 192.168.1.11 80 (HTTP)

      c) The current state for Firewall: NAT: Outbound is Manual, and only the default rules defined for the RELWAN are present.
      d) If this section is relevant: on the Firewall: Rules RELWAN tab, the following default rules have been generated:

         * RFC 1918 networks * * * * * *
         * Reserved/not assigned by IANA * * * * * *

    The last thing before my actual query is to make you aware that, to have a multiple-WAN setup, I have done the following steps:

      a) Under System: Gateways, on the Groups tab, I have created a new group as follows:

         MultipleGateway WANGW, RELWAN Tier 2, Tier 1 Multiple Gateway Test

      b) Then under Firewall: Rules, on the LAN tab, I have created a rule for internal traffic as follows:

         * LAN net * * * MultipleGateway none

      c) This setup works: if I unplug the first ISP, traffic starts routing via ISP 2, and vice versa.

    Now my main query and problem is that I am not able to use the public IP addresses allocated by ISP B; I have tried many small tweaks but none were successful. The notable differences between the two ISPs are:

      a) In the case of ISP A, the usable public IP addresses are on the same subnet, so the gateway used for the WAN IP is the same for the other public IP addresses.
      b) In the case of ISP B, the usable public IP addresses are on a different subnet, so obviously the gateway IP for them is different from the WAN gateway's IP.

    Please let me know how to use the ISP B public usable IP addresses; in the future I am also going to rely on ISP B for more IPs.
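    Editor's note: before touching NAT further, it may be worth confirming from the pfSense shell that the box can even originate traffic sourced from one of the ISP B addresses; if this fails, the problem is upstream routing of that extra subnet rather than the NAT rules. A sketch (FreeBSD ping; the source address is a placeholder for one of the ISP B virtual IPs):

      # Source the ping from an ISP B virtual IP and see if replies come back
      ping -S 115.119.X.X2 -c 4 8.8.8.8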

    Read the article

  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6.x (the default distribution) with five public IPv4 IPs bound to it. Each IP has DNS and rDNS set separately. Each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example:

      192.168.34.104   red.example.com      /etc/postfix
      192.168.36.48    green.example.net    /etc/postfix-green
      192.168.36.49    pink.example.org     /etc/postfix-pink
      192.168.36.50    orange.example.info  /etc/postfix-orange
      192.168.36.51    blue.example.us      /etc/postfix-blue

    I've tested each IP by telnetting to port 25. Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need.

    Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, etc. etc. The email that will be sent from blue.example.us is a feedback loop sent by a single shell account on the server (account5), an account that is dedicated to sending this email. The account receives the feedback loop emails from servers on other networks, saves the bodies of those emails, and then generates a new outbound email header, appends the saved body, and sends the email. It's sending by piping each email to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly. However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script sends email through the main Postfix instance /etc/postfix without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the procmail script that handles this email, though, I get this error:

      sendmail: fatal: User account5(###) is not allowed to submit mail

    I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com:

      Postfix - specify interface to deliver outbound mail on
      Postfix user is not allowed to submit mail
      Postfix rejects php mails

    I have been careful to stop and restart Postfix after each configuration change, and tested the results. Nothing has worked. The main Postfix instance happily accepts outbound email from account5. The postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it.

    =-=-=-=-=-=-=-=-=-=

    At the request of the responder, here are main.cf and master.cf for a) the main Postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us"). [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings.]

    MAIN: master.cf

      smtp inet n - n - - smtpd

    main.cf

      myhostname = red.example.com
      mydomain = example.com
      inet_interfaces = $myhostname, localhost
      inet_protocols = all
      lmtp_host_lookup = native
      smtp_host_lookup = native
      ignore_mx_lookup_error = yes
      mydestination = $myhostname, localhost.$mydomain, localhost
      local_recipient_maps =
      mynetworks = 192.168.34.104/32
      relay_domains = example.com, example.info, example.net, example.org, example.us
      relayhost = [192.168.34.102]   # Separate physical server, main mailserver.
      relay_recipient_maps = hash:/etc/postfix/relay_recipients
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      smtpd_banner = $myhostname ESMTP $mail_name
      multi_instance_wrapper = ${command_directory}/postmulti -p --
      multi_instance_enable = yes
      multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue

    FBL: master.cf

      184.173.119.103:25 inet n - n - - smtpd

    main.cf

      myhostname = blue.example.us
      mydomain = blue.example.us   <= Deliberately set to subdomain only.
      myorigin = $mydomain
      inet_interfaces = $myhostname
      lmtp_host_lookup = native
      smtp_host_lookup = native
      ignore_mx_lookup_error = yes
      mydestination = $myhostname
      local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps
      mynetworks = 192.168.36.51/32, 192.168.35.20/31   <= Second IP is backup MX servers
      relay_domains = $mydestination
      recipient_canonical_maps = hash:/etc/postfix-blue/canonical
      virtual_alias_maps = hash:/etc/postfix-fbl/virtual
      alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
      alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
      mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail
      smtpd_banner = $myhostname ESMTP $mail_name
      authorized_submit_users =
      multi_instance_name = postfix-blue
      multi_instance_enable = yes
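    Editor's note: one line in the FBL instance's main.cf above stands out: authorized_submit_users is set to an empty value, which tells Postfix that no local user at all may submit mail via the sendmail(1) command - and that matches the fatal error being reported word for word. A sketch of the change worth trying (account names as in the question):

      # Allow root and the FBL account to use sendmail(1) on the blue instance
      postconf -c /etc/postfix-blue -e 'authorized_submit_users = root, account5'
      postfix -c /etc/postfix-blue reload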

    Read the article

  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily, for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions, which greatly reduces code clutter and increases clarity in existing code.

    ExpandoObject in .NET 4.0

    Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example, with ExpandoObject you can do stuff like this:

      dynamic expand = new ExpandoObject();
      expand.Name = "Rick";
      expand.HelloWorld = (Func<string, string>) ((string name) =>
      {
          return "Hello " + name;
      });

      Console.WriteLine(expand.Name);
      Console.WriteLine(expand.HelloWorld("Dufus"));

    Internally ExpandoObject uses a Dictionary-like structure and interface to store properties and methods, and then allows you to add and access properties and methods easily. As cool as ExpandoObject is, it has a few shortcomings too:

      It's a sealed type, so you can't use it as a base class
      It only works off 'properties' in the internal Dictionary - you can't expose existing type data
      It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer

    Expando - A truly extensible Object

    ExpandoObject is nice if you just need a dynamic container for a dictionary-like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited. I started thinking about this very scenario for one of my applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that are available to each application. In the real-world scenario the data is loaded from the database in a data reader, and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property and dictionary syntax is not ideal and tedious to work with. I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject, to hold all those extra properties.
    But wouldn't it be nice if the application could actually extend an existing object that looks something like this, as you can with the Expando object:

      public class User : Westwind.Utilities.Dynamic.Expando
      {
          public string Email { get; set; }
          public string Password { get; set; }
          public string Name { get; set; }
          public bool Active { get; set; }
          public DateTime? ExpiresOn { get; set; }
      }

    and then simply start extending the properties of this object dynamically? Using the Expando object I describe later, you can now do the following:

      [TestMethod]
      public void UserExampleTest()
      {
          var user = new User();

          // Set strongly typed properties
          user.Email = "[email protected]";
          user.Password = "nonya123";
          user.Name = "Rickochet";
          user.Active = true;

          // Now add dynamic properties
          dynamic duser = user;
          duser.Entered = DateTime.Now;
          duser.Accesses = 1;

          // you can also add dynamic props via indexer
          user["NickName"] = "AntiSocialX";
          duser["WebSite"] = "http://www.west-wind.com/weblog";

          // Access strong type through dynamic ref
          Assert.AreEqual(user.Name, duser.Name);

          // Access strong type through indexer
          Assert.AreEqual(user.Password, user["Password"]);

          // access dynamically added value through indexer
          Assert.AreEqual(duser.Entered, user["Entered"]);

          // access index added value through dynamic
          Assert.AreEqual(user["NickName"], duser.NickName);

          // loop through all properties dynamic AND strong type properties (true)
          foreach (var prop in user.GetProperties(true))
          {
              object val = prop.Value;
              if (val == null)
                  val = "null";
              Console.WriteLine(prop.Key + ": " + val.ToString());
          }
      }

    As you can see, this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object.

    To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string, which can make some data scenarios much easier. To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally, you can iterate over all properties, both dynamic and strongly typed, if you choose. Lots of flexibility.

    Note also that by default the Expando object works against the (this) instance, meaning it extends the current object. You can also pass in a separate instance to the constructor, in which case that object will be used to iterate over to find properties rather than this. Using this approach provides some really interesting functionality when you use the dynamic type. To use this we have to add an explicit constructor to the Expando subclass:

      public class User : Westwind.Utilities.Dynamic.Expando
      {
          public string Email { get; set; }
          public string Password { get; set; }
          public string Name { get; set; }
          public bool Active { get; set; }
          public DateTime? ExpiresOn { get; set; }

          public User() : base()
          { }

          // only required if you want to mix in a separate instance
          public User(object instance) : base(instance)
          { }
      }

    to allow the instance to be passed.
    When you do, you can now do the following:

[TestMethod]
public void ExpandoMixinTest()
{
    // have Expando work on Addresses
    var user = new User(new Address());

    // cast to dynamic
    dynamic duser = user;

    // Set strongly typed properties
    duser.Email = "[email protected]";
    user.Password = "nonya123";

    // Set properties on the address object
    duser.Address = "32 Kaiea";
    //duser.Phone = "808-123-2131";

    // Set dynamic properties
    duser.NonExistantProperty = "This works too";

    // shows the default Address.Phone value
    Console.WriteLine(duser.Phone);
}

Using the dynamic cast in this case allows you to access *three* different 'objects': the strongly typed properties, the dynamically added properties in the dictionary, and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it).

How Expando works

Behind the scenes, Expando is a DynamicObject subclass, as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existent property, a known method is fired that our code can intercept and provide a value for. Internally, Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object.

Let's look at the code first. The code for the Expando type is straightforward and, given what it provides, relatively short. Here it is:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Dynamic;
using System.Reflection;

namespace Westwind.Utilities.Dynamic
{
    /// <summary>
    /// Class that provides extensible properties and methods. This
    /// dynamic object stores 'extra' properties in a dictionary or
    /// checks the actual properties of the instance.
    ///
    /// This means you can subclass this expando and retrieve either
    /// native properties or properties from values in the dictionary.
    ///
    /// This type allows you three ways to access its properties:
    ///
    /// Directly: any explicitly declared properties are accessible
    /// Dynamic: dynamic cast allows access to dictionary and native properties/methods
    /// Dictionary: Any of the extended properties are accessible via IDictionary interface
    /// </summary>
    [Serializable]
    public class Expando : DynamicObject, IDynamicMetaObjectProvider
    {
        /// <summary>
        /// Instance of object passed in
        /// </summary>
        object Instance;

        /// <summary>
        /// Cached type of the instance
        /// </summary>
        Type InstanceType;

        PropertyInfo[] InstancePropertyInfo
        {
            get
            {
                if (_InstancePropertyInfo == null && Instance != null)
                    _InstancePropertyInfo = Instance.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);
                return _InstancePropertyInfo;
            }
        }
        PropertyInfo[] _InstancePropertyInfo;

        /// <summary>
        /// String Dictionary that contains the extra dynamic values
        /// stored on this object/instance
        /// </summary>
        /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks>
        public PropertyBag Properties = new PropertyBag();

        //public Dictionary<string,object> Properties = new Dictionary<string, object>();

        /// <summary>
        /// This constructor just works off the internal dictionary and any
        /// public properties of this object.
        ///
        /// Note you can subclass Expando.
        /// </summary>
        public Expando()
        {
            Initialize(this);
        }

        /// <summary>
        /// Allows passing in an existing instance variable to 'extend'.
        /// </summary>
        /// <remarks>
        /// You can pass in null here if you don't want to
        /// check native properties and only check the Dictionary!
        /// </remarks>
        /// <param name="instance"></param>
        public Expando(object instance)
        {
            Initialize(instance);
        }

        protected virtual void Initialize(object instance)
        {
            Instance = instance;
            if (instance != null)
                InstanceType = instance.GetType();
        }

        /// <summary>
        /// Try to retrieve a member by name first from instance properties
        /// followed by the collection entries.
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = null;

            // first check the Properties collection for member
            if (Properties.Keys.Contains(binder.Name))
            {
                result = Properties[binder.Name];
                return true;
            }

            // Next check for Public properties via Reflection
            if (Instance != null)
            {
                try
                {
                    return GetProperty(Instance, binder.Name, out result);
                }
                catch { }
            }

            // failed to retrieve a property
            result = null;
            return false;
        }

        /// <summary>
        /// Property setter implementation: tries to set a native property
        /// on the instance first, then falls back to this object's dictionary
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            // first check to see if there's a native property to set
            if (Instance != null)
            {
                try
                {
                    bool result = SetProperty(Instance, binder.Name, value);
                    if (result)
                        return true;
                }
                catch { }
            }

            // no match - set or add to dictionary
            Properties[binder.Name] = value;
            return true;
        }

        /// <summary>
        /// Dynamic invocation method. Currently allows only for Reflection based
        /// operation (no ability to add methods dynamically).
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="args"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
            if (Instance != null)
            {
                try
                {
                    // check instance passed in for methods to invoke
                    if (InvokeMethod(Instance, binder.Name, args, out result))
                        return true;
                }
                catch { }
            }

            result = null;
            return false;
        }

        /// <summary>
        /// Reflection Helper method to retrieve a property
        /// </summary>
        /// <param name="instance"></param>
        /// <param name="name"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        protected bool GetProperty(object instance, string name, out object result)
        {
            if (instance == null)
                instance = this;

            var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance);
            if (miArray != null && miArray.Length > 0)
            {
                var mi = miArray[0];
                if (mi.MemberType == MemberTypes.Property)
                {
                    result = ((PropertyInfo)mi).GetValue(instance, null);
                    return true;
                }
            }

            result = null;
            return false;
        }

        /// <summary>
        /// Reflection helper method to set a property value
        /// </summary>
        /// <param name="instance"></param>
        /// <param name="name"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        protected bool SetProperty(object instance, string name, object value)
        {
            if (instance == null)
                instance = this;

            var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance);
            if (miArray != null && miArray.Length > 0)
            {
                var mi = miArray[0];
                if (mi.MemberType == MemberTypes.Property)
                {
                    ((PropertyInfo)mi).SetValue(Instance, value, null);
                    return true;
                }
            }

            return false;
        }

        /// <summary>
        /// Reflection helper method to invoke a
        /// method
        /// </summary>
        /// <param name="instance"></param>
        /// <param name="name"></param>
        /// <param name="args"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        protected bool InvokeMethod(object instance, string name, object[] args, out object result)
        {
            if (instance == null)
                instance = this;

            // Look at the instanceType
            var miArray = InstanceType.GetMember(name, BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance);
            if (miArray != null && miArray.Length > 0)
            {
                var mi = miArray[0] as MethodInfo;
                result = mi.Invoke(Instance, args);
                return true;
            }

            result = null;
            return false;
        }

        /// <summary>
        /// Convenience method that provides a string Indexer
        /// to the Properties collection AND the strongly typed
        /// properties of the object by name.
        ///
        /// // dynamic
        /// exp["Address"] = "112 nowhere lane";
        /// // strong
        /// var name = exp["StronglyTypedProperty"] as string;
        /// </summary>
        /// <remarks>
        /// The getter checks the Properties dictionary first,
        /// then looks in PropertyInfo for properties.
        /// The setter checks the instance properties before
        /// checking the Properties dictionary.
        /// </remarks>
        /// <param name="key"></param>
        /// <returns></returns>
        public object this[string key]
        {
            get
            {
                try
                {
                    // try to get from the properties collection first
                    return Properties[key];
                }
                catch (KeyNotFoundException)
                {
                    // try reflection on instanceType
                    object result = null;
                    if (GetProperty(Instance, key, out result))
                        return result;

                    // nope - doesn't exist
                    throw;
                }
            }
            set
            {
                if (Properties.ContainsKey(key))
                {
                    Properties[key] = value;
                    return;
                }

                // check instance for existence of the member first
                var miArray = InstanceType.GetMember(key, BindingFlags.Public | BindingFlags.GetProperty);
                if (miArray != null && miArray.Length > 0)
                    SetProperty(Instance, key, value);
                else
                    Properties[key] = value;
            }
        }

        /// <summary>
        /// Returns all properties: optionally the instance's properties
        /// followed by the dictionary entries
        /// </summary>
        /// <param name="includeInstanceProperties"></param>
        /// <returns></returns>
        public IEnumerable<KeyValuePair<string, object>> GetProperties(bool includeInstanceProperties = false)
        {
            if (includeInstanceProperties && Instance != null)
            {
                foreach (var prop in this.InstancePropertyInfo)
                    yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null));
            }

            foreach (var key in this.Properties.Keys)
                yield return new KeyValuePair<string, object>(key, this.Properties[key]);
        }

        /// <summary>
        /// Checks whether a property exists in the Property collection
        /// or as a property on the instance
        /// </summary>
        /// <param name="item"></param>
        /// <returns></returns>
        public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false)
        {
            bool res = Properties.ContainsKey(item.Key);
            if (res)
                return true;

            if (includeInstanceProperties && Instance != null)
            {
                foreach (var prop in this.InstancePropertyInfo)
                {
                    if (prop.Name == item.Key)
                        return true;
                }
            }

            return false;
        }
    }
}

Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and the Contains() and GetProperties() methods, which work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that a) it doesn't add much value, since you can access the Properties dictionary directly, and b) I wanted to keep the interface to the class very lean so that it can serve as an entity type if desired.
    Implementing IDictionary (or even IEnumerable) causes LINQ extension methods to pop up on the type, which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values, which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and leaving iteration functions to the publicly exposed Properties dictionary.

Note that the dictionary used here is a custom PropertyBag class I created to allow serialization to work. One important aspect for my apps is that whatever custom properties get added, they have to be accessible to AJAX clients, since the particular app I'm working on is a Single Page Web app where most of the Web access is through JSON AJAX calls. PropertyBag can serialize to XML and one-way serialize to JSON using the JavaScript serializer (not the DCS serializers though).

The key components that make Expando work in this code are the Properties dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public, so if you choose you can explicitly access the collection to get better performance or to manipulate the members in internal code (like loading up dynamic values from a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object, as does user["Name"] = "RogaDugDog".

What's your Use Case?

This is still an early prototype, but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and to expose those values in a nice and easy to use manner. I'm also finding that using this type of object for ViewModels works very well for adding custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for?

Resources

Source Code and Tests (GitHub)
Also integrated in Westwind.Utilities of the West Wind Web Toolkit
West Wind Utilities NuGet

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in CSharp  .NET  Dynamic Types


  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up.

It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge.

I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply try to push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you're in a hurry and you want to check it out, you can find more information and download links here:

West Wind WebSurge Product Page
Walk through Video
Download link (zip)
Install from Chocolatey
Source on GitHub

For a more detailed discussion of the whys and hows and some background, continue reading.

How did I get here?

When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think that I could do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn't work well for our scenario, as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter what people were using, I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses, though, from people asking me to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use.

As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First, it's tied to Visual Studio, so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing that's rarely an option.

Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in.

I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft's tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and it simply doesn't work anymore on higher end hardware.

West Wind WebSurge – Making Load Testing Quick and Easy

So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It's super easy to capture sessions, either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests.

I've been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, along with the ability to easily save one or more of those URLs. A few weeks back I made a walkthrough video that goes over most of the features of WebSurge in some detail. Note that the UI has changed slightly since then, so there are some UI improvements.
    Most notably, the test results screen has recently been updated to a different layout that provides more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

Session Capturing

As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here's what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in Capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool.

Alternately, you can use Fiddler itself and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler's proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls.

The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don't want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally, you can provide URL filters in the configuration file – filters let you provide strings that, if contained in a URL, will cause requests to be ignored. Again, this is useful if you don't filter by domain but you want to filter out things like static image, CSS and script files. Often you're not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

SSL Captures require Fiddler

Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

Session Storage

A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
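To make that concrete, here's a hypothetical sketch of what a two-URL session trace might look like. The URLs and headers are invented, and the exact separator characters shown are only illustrative of the idea; the real files use the Fiddler export format described next:

GET http://localhost/store/ HTTP/1.1
Accept: text/html
User-Agent: WebSurge

------------------------------------------------------------------

POST http://localhost/store/api/cart HTTP/1.1
Content-Type: application/json

{ "sku": "WW-0001", "qty": 1 }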
    The headers are standard HTTP headers, except that the full URL (instead of just the domain-relative path) is stored as part of the first HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand or under program control by writing out a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it's easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well: Again – it's just plain HTTP headers, so anything you can do with HTTP can be added here.

Use it for single URL Testing

Incidentally, I've also found this form to be an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It actually works well for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

Overriding Cookies and Domains

Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the cookie value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise, you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost, you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

Running Load Tests

Once you've created a Session, you can specify the length of the test in seconds and the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs, so each thread runs requests in a different order. This avoids bunching up URLs when tests start, as all threads would otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test.

While sessions run, some progress information is displayed: By default there's a live view of requests displayed in a console-like window. At the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed, and what the current requests-per-second count is for all requests.
    Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice to see that something is happening, and it also gives you a slight idea of what's happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second, there's not much to see anyway, as requests roll by way too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

Test Results

When the test is done you get a simple results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted, so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired, so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests.

Finally, you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly, so exports can end up taking a while to complete.

Command Line Interface

WebSurge runs with a small core load engine, and this engine is plugged into the front end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full Session file. By default WebSurgeCli shows progress every second, displaying the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and show only the results. The command line interface can be useful for build integration, which allows checking for failures or for hitting a specific requests-per-second count, for example. It's also nice to use as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
    Current Status

Currently West Wind WebSurge is still in beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.

I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price, which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee rewriting the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.

I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

Resources

For more info on WebSurge and to download it to try it out, use the following links.

West Wind WebSurge Home
Download West Wind WebSurge
Getting Started with West Wind WebSurge Video

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in ASP.NET


  • Using Unity – Part 1

    - by nmarun
    I have been going through implementing some IoC patterns using Unity, so I decided to share my learnings (I know that's not an English word, but you get the point). OK, so I have an ASP.NET project named ProductWeb and a class library called ProductModel. In the model library, I have a class called Product:

1: public class Product
2: {
3:    public string Name { get; set; }
4:    public string Description { get; set; }
5: 
6:    public Product()
7:    {
8:       Name = "iPad";
9:       Description = "Not just a reader!";
10:    }
11: 
12:    public string WriteProductDetails()
13:    {
14:       return string.Format("Name: {0} Description: {1}", Name, Description);
15:    }
16: }

In the Page_Load event of default.aspx, I'll need something like:

1: Product product = new Product();
2: productDetailsLabel.Text = product.WriteProductDetails();

Now, let's go 'Unity'fy this application. I assume you have all the bits for the pattern. If not, get it from here. I found this schematic representation of the Unity pattern at the above link. This image might not make much sense to you now, but as we proceed, things will get better.

The first step in implementing the Inversion of Control pattern is to create interfaces that your types will implement. An IProduct interface is added to the ProductModel project.

1: public interface IProduct
2: {
3:    string WriteProductDetails();
4: }

Let's make our Product class implement the IProduct interface. The application will compile and run as before despite the changes made. Add the following references to your web project:

Microsoft.Practices.Unity
Microsoft.Practices.Unity.Configuration
Microsoft.Practices.Unity.StaticFactory
Microsoft.Practices.ObjectBuilder2

We need to add a few lines to the web.config file. The line below declares the configuration section handler and the version of Unity we'll be using.

1: <configSections>
2:    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
3: </configSections>

Add another block with the same name as the section name declared above – 'unity'.

1: <unity>
2:    <typeAliases>
3:       <!--Custom object types-->
4:       <typeAlias alias="IProduct" type="ProductModel.IProduct, ProductModel"/>
5:       <typeAlias alias="Product" type="ProductModel.Product, ProductModel"/>
6:    </typeAliases>
7:    <containers>
8:       <container name="unityContainer">
9:          <types>
10:             <type type="IProduct" mapTo="Product"/>
11:          </types>
12:       </container>
13:    </containers>
14: </unity>

From the Unity configuration schematic shown above, you can see that the 'unity' block has a 'typeAliases' and a 'containers' segment. The typeAlias element gives a 'short name' for a type. This short name can be used to point to the type anywhere in the configuration file (web.config in our case, but all this information could be coming from an external XML file as well). The container element holds all the mapping information. This container is referenced through its name attribute in code, and you can have multiple container elements in the containers segment. The 'type' element in line 10 basically says: 'When Unity is asked to resolve the alias IProduct, return an instance of whatever the short name Product points to'. This is the most basic piece of the Unity pattern, and all of it is accomplished purely through configuration.
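As an aside, the same mapping can also be set up in code rather than in configuration. Here's a minimal sketch using Unity's registration API, shown purely for contrast with the config-driven approach this post follows:

// Code-based alternative to the <unity> config section:
// map IProduct to Product directly on the container.
IUnityContainer container = new UnityContainer();
container.RegisterType<IProduct, Product>();

// Resolution works exactly as it does with a configured container.
IProduct product = container.Resolve<IProduct>();

The configuration route has the advantage that the mapping can be changed without recompiling, which is the point of the next paragraph.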
    So, if in the future you have a change in your model, all you need to do is: implement IProduct on the new model class, and then either add a typeAlias for the new type and point the mapTo attribute to the new alias, or modify the mapTo attribute of the type element to point to the new alias (as the case may be).

Now for the calling code. It's a good idea to store your Unity container details in the Application cache, as this is rarely bound to change and it also improves performance. The Global.asax.cs file comes to our rescue:

1: protected void Application_Start(object sender, EventArgs e)
2: {
3:    // create and populate a new Unity container from configuration
4:    IUnityContainer unityContainer = new UnityContainer();
5:    UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
6:    section.Containers["unityContainer"].Configure(unityContainer);
7:    Application["UnityContainer"] = unityContainer;
8: }
9: 
10: protected void Application_End(object sender, EventArgs e)
11: {
12:    Application["UnityContainer"] = null;
13: }

All this says is: create an instance of UnityContainer and read the 'unity' section from the configSections segment of the web.config file. Then get the container named 'unityContainer' and store it in the Application cache. In my code-behind file, I'll make use of this UnityContainer to create an instance of the Product type.

1: public partial class _Default : Page
2: {
3:    private IUnityContainer unityContainer;
4:    protected void Page_Load(object sender, EventArgs e)
5:    {
6:       unityContainer = Application["UnityContainer"] as IUnityContainer;
7:       if (unityContainer == null)
8:       {
9:          productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />";
10:       }
11:       else
12:       {
13:          IProduct productInstance = unityContainer.Resolve<IProduct>();
14:          productDetailsLabel.Text = productInstance.WriteProductDetails();
15:       }
16:    }
17: }

Looking at the 'else' block, I'm asking the unityContainer object to resolve the IProduct type. All this does is look at the matching type in the container, read its mapTo attribute value, get the full type name from the alias and create an instance of the Product class. Fabulous!! I'll go into more detail in the next blog. The code for this blog can be found here.


  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some seemingly required tidbits. The contributions which I made were:

To support StructureMap
To include REST (JSON and XML) support for the service contract

I want to format the examples I have made so they fit in with the current format of examples over on Agatha, and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. While building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense. I have chosen a really basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code.

Setting the scene

I have followed the Pipes and Filters pattern with the IQueryable interface on my Repository and exposed the following methods to query Products:

IQueryable<Product> GetProducts();
IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName)
Product ByProductCode(this IQueryable<Product> products, String productCode)

I have an interface for the IProductRepository, but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data. Another good reason for following an interface-based approach is that it will demonstrate usage of my first contribution, the StructureMap support. Finally, the two domain objects I have made are Product and Category, as shown below:

public class Product
{
    public String ProductCode { get; set; }
    public String Name { get; set; }
    public Decimal Price { get; set; }
    public Decimal Rrp { get; set; }
    public Category Category { get; set; }
}

public class Category
{
    public String Name { get; set; }
}

Requirements for the REST Support

One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF service model attributes like DataContract, DataMember etc. Unfortunately, from what I have seen, these are required if you want the same types to work with your REST endpoint. I have not tried, but I assume the same result can be achieved by simply decorating the same classes with the Serializable attribute. Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following attribute parameters:

Name
Namespace

e.g.

[DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")]
public class GetProductsRequest : Request
{
}

Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the JSON. One of the things which you already know, and are then reminded of, is that each of your Requests and Responses ultimately inherits from an abstract base class respectively. This information needs to be represented in a way native to the format being used. I had seen this in XML, but I had not seen the format which is required for the JSON.

JSON Consumer Example

I have used jQuery to create the example, and I simply want to make two requests to the server, which, as you will know with Agatha, are transmitted inside an array to reduce the service calls.
    I have also used a tool called json2, which is again over at Google Code, simply to convert my JSON expression into its string format for transmission. You will notice that I specify the type of Request I am using and the relevant namespace it belongs to. Also notice that the second request has a parameter, so each of these two objects represents an abstract Request, and the parameters of the object describe it.

<script type="text/javascript">
    var bodyContent = $.ajax({
        url: "http://localhost:50348/service.svc/json/processjsonrequests",
        global: false,
        contentType: "application/json; charset=utf-8",
        type: "POST",
        processData: true,
        data: JSON.stringify([
            { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" },
            { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests",
              CategoryName: "Category1" }
        ]),
        dataType: "json",
        success: function(msg) {
            alert(msg);
        }
    }).responseText;
</script>

XML Consumer Example

For the XML consumer example I have chosen to use a simple console application which makes a WebRequest to the service using XML as the request body. I have made a crude static method which simply reads from an XML file, replaces a value with a parameter and returns the formatted XML. I say crude, but it shows how XML templates for each type of Request could be made, with a wrapper utility in whatever language you use combining the requests which are required. The following XML is the same Request array as shown above, simply in the XML format.

<?xml version="1.0" encoding="utf-8" ?>
<ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
    <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/>
    <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests">
        <a:CategoryName>{CategoryName}</a:CategoryName>
    </Request>
</ArrayOfRequest>

It is funny, because I remember submitting a question to StackOverflow asking whether there was a REST client generation tool similar to what Microsoft used for their RestStarterKit, but one which could be applied to existing services which have REST endpoints attached. I could not find any, but this is now definitely something which I am going to build, as I think it is extremely useful to have, and it should not be too difficult based on the information I now know from the above. Finally, I thought that the Strategy pattern would lend itself really well to this type of thing, so it can accommodate different languages. I think that is about it; I have included the code for the example console app below in case anyone wants to have a mooch at the code.
    As I said above, I want to reformat these to fit in with the current examples over on the Agatha project, but also, now thinking about it, make a Documentation Web method… {brain ticking} :-) Cheers for now, and here is the final bit of code:

static void Main(string[] args)
{
    var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests");
    request.Method = "POST";
    request.ContentType = "text/xml";

    using (var writer = new StreamWriter(request.GetRequestStream()))
    {
        writer.WriteLine(GetExampleRequestsString("Category1"));
    }

    var response = request.GetResponse();
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        Console.WriteLine(reader.ReadToEnd());
    }

    Console.ReadLine();
}

static string GetExampleRequestsString(string categoryName)
{
    var data = File.ReadAllText(Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "ExampleRequests.xml"));
    data = data.Replace("{CategoryName}", categoryName);
    return data;
}
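Since the first of the two contributions mentioned above was StructureMap support, here is a rough sketch of what wiring up the example's repository might look like with StructureMap's registration API. The IProductRepository and ProductRepository names come from the 'Setting the scene' section; the Agatha-specific bootstrapping is omitted, so treat this purely as an illustrative fragment rather than the project's actual hookup:

// Hypothetical composition-root wiring with StructureMap;
// Agatha's own initialization is not shown here.
ObjectFactory.Initialize(x =>
{
    x.For<IProductRepository>().Use<ProductRepository>();
});

// Request handlers can then take IProductRepository as a
// constructor dependency and have it supplied automatically.
var repository = ObjectFactory.GetInstance<IProductRepository>();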


  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide its own execution.  In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading.  Rather, the Task class is used to implement our decomposition of tasks.  Once we’ve implemented our tasks, we need to execute them.  In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class.

The TaskScheduler class is an abstract class which provides a single function: it schedules tasks and executes them within an appropriate context.  This is the class which actually runs individual Task instances.  The .NET Framework provides two (internal) implementations of the TaskScheduler class.

Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks.  The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool.  It can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start().

Normally, when a Task is started by the default TaskScheduler, the task is treated as a single work item and run on a ThreadPool thread.  This pools tasks, and provides Task instances with all of the advantages of the ThreadPool, including thread pooling for reduced resource usage and an upper cap on the number of work items.  In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues.  By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool.

There is one notable exception to my statements above when using the default TaskScheduler.  If a Task is created with TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation.  This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of its worker threads.

The Task Parallel Library provides one other implementation of the TaskScheduler class.  In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext.  This scheduler can be retrieved, within a thread that provides a valid SynchronizationContext, by calling the TaskScheduler.FromCurrentSynchronizationContext() method.

This implementation of TaskScheduler is intended for use with user interface development.  Windows Forms and Windows Presentation Foundation both require that any access to user interface controls occur on the same thread that created the control.  For example, if you want to set the text within a Windows Forms TextBox and you’re working on a background thread, that UI call must be marshaled back onto the UI thread.  The most common way this is handled depends on the framework being used.  In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used.  In WPF, the equivalent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke.

As an example, say we’re working on a background thread and we want to update a TextBlock in our user interface with a status label.  The code would typically look something like:

// Within background thread work...
string status = GetUpdatedStatus();
Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action(() =>
{
    statusLabel.Text = status;
}));
// Continue on in background method

This works fine, but it forces your method to take a dependency on WPF or Windows Forms.  There is an alternative option, however.  Both Windows Forms and WPF, when initialized, set up a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property.  This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner.

The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method.  When setting up our Tasks, as long as we’re working on the UI thread, we can construct a TaskScheduler via:

TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

We can then use this scheduler on any thread to marshal data back onto the UI thread.  For example, our code above can then be rewritten as:

string status = GetUpdatedStatus();
(new Task(() =>
{
    statusLabel.Text = status;
}))
.Start(uiScheduler);
// Continue on in background method

This is nice since it allows us to write code that isn’t tied to Windows Forms or WPF, but is still fully functional with those technologies.  I’ll discuss even more uses for the SynchronizationContext-based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct.

In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler.  The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations.  These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
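To close the loop on the two schedulers, here is a small sketch that combines them, previewing the continuations mentioned above: the work runs on the default (ThreadPool) scheduler and the follow-up runs on the UI scheduler.  The statusLabel and GetUpdatedStatus names are the same assumed names used in the snippets above:

// Capture the UI scheduler while still on the UI thread
TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

Task.Factory.StartNew(() => GetUpdatedStatus())
    .ContinueWith(t =>
    {
        // This continuation runs on the UI thread,
        // so touching the control is safe
        statusLabel.Text = t.Result;
    }, uiScheduler);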


  • CodePlex Daily Summary for Monday, March 01, 2010

    CodePlex Daily Summary for Monday, March 01, 2010

    New Projects

      • ActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds S...
      • AWS SimpleDB Browser: A basic GUI browser tool for inspection and querying of a SimpleDB.
      • Desktop Dimmer: A simple application for dimming the desktop around windows, videos, or other media.
      • Disk Defuzzer: Compare chaos of files and folders with customizable SQL queries. This little application scans files in any two folders, generates data in an A...
      • Dynamic Configuration: Dynamic Configuration is a (very) small library to provide an API-compatible replacement for the System.Configuration.ConfigurationManager class so...
      • Expression Encoder 3 Visual Basic Samples: Visual Basic sample code that calls the Expression Encoder 3 object model.
      • Extended Character Keyboard: A lightweight onscreen keyboard that allows you to enter special characters like "á" and "û". Also supports adding of 7 custom buttons.
      • FileHasher: This project provides a simple tool for generating and verifying file hashes. I created this to help the QA team I work with. The project is all C#...
      • Fluent Assertions: Fluent interface for specifying assertions more naturally and with more clarity than the traditional assertion syntax such as that offered by MSTest, ...
      • Foursquare BlogEngine Widget: A basic widget for BlogEngine which displays the last foursquare check-ins.
      • Graffiti CMS Events Plugin: Plugin for Graffiti CMS that allows creating Event posts and rendering an Event Calendar.
      • HeadCounter: HeadCounter is a raid attendance and loot tracking application for World of Warcraft.
      • HRM Core (QL Nhan Su): This is Human Resource Management software for Viet Nam: payroll and personnel management built around Vietnamese business practices.
      • IronPython Silverlight Sharpdevelop Template: This IronPython Silverlight SharpDevelop template makes it easier for you to make Silverlight applications in IronPython with SharpDevelop.
      • kingbox: My test code for studying VS 2005.
      • link_attraente: Final course project (Projeto Conclusão de Curso).
      • ORMSharp.Net: ORMSharp.Net https://code.google.com/p/ormsharp/ http://www.sqlite.org/ http://sqlite.phxsoftware.com/ http://sourceforge.net/projects/sqlite-dotnet2/
      • Orz framework: The Orz framework is more like a helpful library; if you develop with the .NET Framework 3.0, it will be very useful to you. The Orz framework encapsul...
      • OTManager: OTManager
      • SharePoint URL Ping Tool: The Url Ping Tool is a farm feature in SharePoint that provides additional performance and tracing information that can be used to troubleshoot issu...
      • SunShine: SunShine Project
      • ToolSuite.ValidationExpression: Assembly with regular expressions for the RegularExpressionValidator control.
      • Twitual Studio: A Visual Studio 2010 based Twitter client. Now you have one less reason for pressing Alt+Tab. Plus you still look like you're working!
      • Velocity Hosting Tool: A program designed to aid a HT Velocity host in hosting and recording tournaments.
      • Watermarker: Adds watermarks to pictures to prevent copying. Icon taken from PICOL. Can work with packs of images.
      • Zack's Fiasco - ASP.NET Script Includer: Script includer to include scripts (JS or CSS) once and only once, and to include the correct format by differentiating between release and build. Th...

    New Releases

      • All-In-One Code Framework: All-In-One Code Framework 2010-02-28: Improved and newly added examples. For an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Samples for ASP.NET Name ...
      • All-In-One Code Framework (简体中文): All-In-One Code Framework 2010-02-28: Improved and newly added examples. For an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Latest download link: http://c...
      • AWS SimpleDB Browser: SimpleDbBrowser.zip Initial Release: The initial release of the SimpleDbBrowser. Unzip the files in the archive and place them all in a folder, then run the .exe. No installer is used...
      • BattLineSvc: V1: First release of BattLineSvc.
      • CC.Votd Screen Saver: CC.Votd 1.0.10.301: More bug fixes and minor enhancements. Note: Only download the (Screen Saver) version if you plan to manually install. For most users the (Install...
      • Dynamic Configuration: DynamicConfiguration Release 1: Dynamic Configuration DLL release.
      • eIDPT - Cartão de Cidadão .NET Wrapper: EIDPT VB6 Demo Program: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.
      • eIDPT - Cartão de Cidadão .NET Wrapper: eIDPT VB6 Demo Program Source: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.
      • ESPEHA: Espeha 10: 1. Help available on F1 and via context menu '?'; 2. Width of categories view is preserved through app starts; 3. Drag'n'drop for tasks view allows ...
      • Extended Character Keyboard: OnscreenSCK Beta V1.0: OnscreenSCK Beta Version 1.0.
      • Extended Character Keyboard: OnscreenSCK Beta V1.0 Source: OnscreenSCK Beta Version 1.0 source code.
      • FileHasher: Console Version v 0.5: This release provides a very basic and minimal command-line utility for generating and validating file hashes. The supported command-line paramete...
      • Furcadia Framework for Third Party Programs: 0.2.3 Epic Wrench: Warning: untested on Linux. FurcadiaLib\Net\NetProxy.cs: fixed a bug I made before the update. FurcadiaFramework_Example\Demo\IDemo.cs: ignore me. F...
      • Graffiti CMS Events Plugin: Version 1.0: Initial release of the Events Plugin.
      • HeadCounter: HeadCounter 1.2.3 'Razorgore': Added a "Raider Post" feature for posting details of a particular raider. Added a Default Period option to allow selection of Short, Long or Lifetime...
      • Home Access Plus+: v3.0.0.0: Version 3.0.0.0 release change log: reconfiguration of the web.config; ability to add additional links to the homepage via web.config; ability to add...
      • Home Access Plus+: v3.0.1.0: Version 3.0.1.0 release change log: fixed problem with moving. File changes: ~/bin/chs extranet.dll, ~/bin/chs extranet.pdb
      • Home Access Plus+: v3.0.2.0: Version 3.0.2.0 release change log: fixed problem with stylesheet. File changes: ~/chs.master
      • HRM Core (QL Nhan Su): HRMCore_src: Source of HRMCore.
      • IRC4N00bz: IRC4N00bz v1.0.0.2: There wasn't much updated this weekend. I updated 2 'raw' events. One is all raw messages and the other is events that aren't caught in the dll. ...
      • IronPython Silverlight Sharpdevelop Template: Version 1 Template: Just unzip it into the SharpDevelop python templates folder. For example: C:\Program Files\SharpDevelop\3.0\AddIns\AddIns\BackendBindings\PythonBi...
      • MDownloader: MDownloader-0.15.4.56156: Fixed handling of exceptions; previous handling could lead to a frozen items state. Fixed validating uploading.com links.
      • OTManager: Activity Log: 2010.02.28 >> Thread reopened; 2010.02.28 >> Re-organized WBD Features/WMBD Features; 2010.02.28 >> Project status is active again.
      • Picasa Downloader: PicasaDownloader (41175): NOTE: The previous release was accidentally the same as the one before that (forgot to rebuild the installer). Changelog: Fixed workitem 10296 (Sav...
      • PicNet Html Table Filter: Version 2.0: Testing w/ jQuery 1.3.2.
      • Program Scheduler: Program Scheduler 1.1.4: Release note: Bug fix: if the log window is docked and the user moves the log window, the main window will move too. Added a menu to the log window to clear...
      • QueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.00: This is the initial release of this module. Remember... This is just a proof of concept to add AJAX functionality to your DotNetNuke modules.
      • Rainweaver Framework: February 2010 Release: Code drop including an alpha release of the Entity System. See more information on the Documentation page.
      • RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.1: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...
      • Team Foundation Server Revision Labeller for CruiseControl.NET: TFS Labeller for CruiseControl.NET - TFS 2008: Release. First release of the Team Foundation Server Labeller for CruiseControl.NET. This specific version is bound to TFS 2008 DLLs.
      • ToolSuite.ValidationExpression: 01.00.01.000: First release of the time validation class; the assembly file is ready to use, the documentation is not complete.
      • VCC: Latest build, v2.1.30228.0: Automatic drop of latest build.
      • WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.7.00: What's new: FileBrowser: non-admin users will only see a user subfolder (..\Portals\0\userfiles\UserName); CKFinder: non-admin users will only see ...
      • Watermarker: Watermarker: First public version. Can build a watermark only in the top left corner of one image at a time.
      • While You Were Away - WPF Screensaver: Initial Release: This is the code released when the article went live.

    Most Popular Projects

      • MetaSharp
      • Rawr
      • WBFS Manager
      • AJAX Control Toolkit
      • Microsoft SQL Server Product Samples: Database
      • Silverlight Toolkit
      • Windows Presentation Foundation (WPF)
      • Microsoft SQL Server Community & Samples
      • ASP.NET
      • DotNetNuke® Community Edition

    Most Active Projects

      • Rawr
      • BlogEngine.NET
      • MapWindow GIS
      • Common Context Adapters
      • patterns & practices – Enterprise Library
      • SharpMap - Geospatial Application Framework for the CLR
      • SLARToolkit - Silverlight Augmented Reality Toolkit
      • DiffPlex - a .NET Diff Generator
      • Rapid Entity Framework. (ORM). CTP 2
      • jQuery Library for SharePoint Web Services

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases, and when it comes to associations, it brings ultimate flexibility. I'm a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts; this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

What is Mapping?
Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases.

What is Relationship Mapping?
A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

Types of Relationships
There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types:

  • One-to-one relationships: This is a relationship where the maximum of each of its multiplicities is one.
  • One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one.
  • Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one.

The second category is based on directionality and it contains two types:

  • Uni-directional relationships: When an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, a navigation property exists only on one of the association ends, not on both.
  • Bi-directional relationships: When the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

How Are Object Relationships Implemented in POCO Domain Models?
When the multiplicity is one (e.g. 0..1 or 1), the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*), the relationship is implemented via an ICollection of the type of the other object.

How Are Relational Database Relationships Implemented?
Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute (or attributes) that appears in one table and must be the primary key or another candidate key in another table. With a one-to-one relationship, the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship, we implement a foreign key from the "one table" to the "many table". We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.

Introducing the Model
Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user can have one billing address. The Address information of a User is modeled as a separate class, as you can see in the UML model below. In object-modeling terms, this association is a kind of aggregation—a part-of relationship. Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

Fine-grained Domain Models
The motivation behind this design was to achieve a fine-grained domain model. In crude terms, fine-grained means "more classes than tables". For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we could use the same approach, representing the two addresses as six string-valued properties of the User class. But it's much better to model this using an Address class, where User has BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse, and is more understandable.

Complex Types: Splitting a Table Across Multiple Types
Back to our model: there is no difference between this composition and other, weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. C# has no concept of composition—a class or property can't be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database, everything is going to be saved into one single table.

How to Implement a Complex Type with Code First
Code First has a concept of Complex Type Discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

    using System.Data.Entity;

    public class User
    {
        public int UserId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Username { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string PostalCode { get; set; }
    }

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }
    }

With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

Database Schema
The mapping result for this object model is as follows:

Limitations of This Mapping
There are two important limitations to classes mapped as Complex Types:

  • Shared references are not possible: The Address Complex Type doesn't have its own database identity (primary key) and so can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
  • No elegant way to represent a null reference: There is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

Summary
In this post we learned about fine-grained domain models, of which complex types are just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types.

References
ADO.NET team blog
Mapping Objects to Relational Databases
Java Persistence with Hibernate
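
    Editor's addendum: if you would rather be explicit than rely on Complex Type Discovery, the type can also be registered by hand through the fluent API. A minimal sketch, assuming the CTP5-era DbModelBuilder API (the override is optional; the convention-based mapping in the post above works without it):

    using System.Data.Entity;

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Explicitly register Address as a complex type instead of relying
            // on Code First's automatic complex type discovery convention.
            modelBuilder.ComplexType<Address>();

            base.OnModelCreating(modelBuilder);
        }
    }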

    Read the article

  • usb mouse/keyboard doesn't work with 3.11.0-12-generic kernel

    - by x-yuri
    I can't use my USB keyboard/mouse after upgrading from raring to saucy. The keyboard works in the GRUB menu and if I boot with the previous kernel version (3.8.0-31-generic). My new kernel version is 3.11.0-12-generic. I've got a Mad Catz R.A.T.7 wired USB mouse, a Canyon CNL-MBMSO02 wired USB mouse and a Logitech diNovo Edge wireless keyboard, connected to the computer through a Logitech Unifying Receiver. Using a PS/2 keyboard I've managed to gather some information. dmesg says:

    [ 0.166273] ACPI: bus type USB registered
    [ 0.166273] usbcore: registered new interface driver usbfs
    [ 0.166273] usbcore: registered new interface driver hub
    [ 0.166273] usbcore: registered new device driver usb
    ...
    [ 3.534226] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
    [ 3.534228] ehci-pci: EHCI PCI platform driver
    [ 3.534291] ehci-pci 0000:00:1a.7: setting latency timer to 64
    [ 3.534299] ehci-pci 0000:00:1a.7: EHCI Host Controller
    [ 3.534304] ehci-pci 0000:00:1a.7: new USB bus registered, assigned bus number 1
    [ 3.534315] ehci-pci 0000:00:1a.7: debug port 1
    [ 3.538218] ehci-pci 0000:00:1a.7: cache line size of 64 is not supported
    [ 3.538231] ehci-pci 0000:00:1a.7: irq 18, io mem 0xd3325400
    [ 3.548017] ehci-pci 0000:00:1a.7: USB 2.0 started, EHCI 1.00
    [ 3.548042] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
    [ 3.548045] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.548048] usb usb1: Product: EHCI Host Controller
    [ 3.548050] usb usb1: Manufacturer: Linux 3.11.0-12-generic ehci_hcd
    [ 3.548053] usb usb1: SerialNumber: 0000:00:1a.7
    [ 3.548155] hub 1-0:1.0: USB hub found
    [ 3.548159] hub 1-0:1.0: 6 ports detected
    [ 3.548311] ehci-pci 0000:00:1d.7: setting latency timer to 64
    [ 3.548319] ehci-pci 0000:00:1d.7: EHCI Host Controller
    [ 3.548323] ehci-pci 0000:00:1d.7: new USB bus registered, assigned bus number 2
    [ 3.548333] ehci-pci 0000:00:1d.7: debug port 1
    [ 3.552228] ehci-pci 0000:00:1d.7: cache line size of 64 is not supported
    [ 3.552239] ehci-pci 0000:00:1d.7: irq 23, io mem 0xd3325000
    [ 3.564014] ehci-pci 0000:00:1d.7: USB 2.0 started, EHCI 1.00
    [ 3.564044] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
    [ 3.564047] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.564050] usb usb2: Product: EHCI Host Controller
    [ 3.564052] usb usb2: Manufacturer: Linux 3.11.0-12-generic ehci_hcd
    [ 3.564056] usb usb2: SerialNumber: 0000:00:1d.7
    [ 3.564163] hub 2-0:1.0: USB hub found
    [ 3.564167] hub 2-0:1.0: 6 ports detected
    [ 3.564274] ehci-platform: EHCI generic platform driver
    [ 3.564280] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
    [ 3.564281] ohci-platform: OHCI generic platform driver
    [ 3.564287] uhci_hcd: USB Universal Host Controller Interface driver
    [ 3.564345] uhci_hcd 0000:00:1a.0: setting latency timer to 64
    [ 3.564347] uhci_hcd 0000:00:1a.0: UHCI Host Controller
    [ 3.564352] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3
    [ 3.564378] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000f0c0
    [ 3.564402] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
    [ 3.564404] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.564406] usb usb3: Product: UHCI Host Controller
    [ 3.564408] usb usb3: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
    [ 3.564410] usb usb3: SerialNumber: 0000:00:1a.0
    [ 3.564478] hub 3-0:1.0: USB hub found
    [ 3.564482] hub 3-0:1.0: 2 ports detected
    [ 3.564589] uhci_hcd 0000:00:1a.1: setting latency timer to 64
    [ 3.564592] uhci_hcd 0000:00:1a.1: UHCI Host Controller
    [ 3.564597] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4
    [ 3.564623] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000f0a0
    [ 3.564647] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
    [ 3.564649] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.564651] usb usb4: Product: UHCI Host Controller
    [ 3.564653] usb usb4: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
    [ 3.564654] usb usb4: SerialNumber: 0000:00:1a.1
    [ 3.564727] hub 4-0:1.0: USB hub found
    [ 3.564730] hub 4-0:1.0: 2 ports detected
    [ 3.564834] uhci_hcd 0000:00:1a.2: setting latency timer to 64
    [ 3.564837] uhci_hcd 0000:00:1a.2: UHCI Host Controller
    [ 3.564843] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5
    [ 3.564863] uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000f080
    [ 3.564885] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
    [ 3.564887] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.564889] usb usb5: Product: UHCI Host Controller
    [ 3.564891] usb usb5: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
    [ 3.564892] usb usb5: SerialNumber: 0000:00:1a.2
    [ 3.564962] hub 5-0:1.0: USB hub found
    [ 3.564966] hub 5-0:1.0: 2 ports detected
    [ 3.565073] uhci_hcd 0000:00:1d.0: setting latency timer to 64
    [ 3.565076] uhci_hcd 0000:00:1d.0: UHCI Host Controller
    [ 3.565081] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6
    [ 3.565101] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000f060
    [ 3.565124] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001
    [ 3.565127] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.565128] usb usb6: Product: UHCI Host Controller
    [ 3.565130] usb usb6: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
    [ 3.565132] usb usb6: SerialNumber: 0000:00:1d.0
    [ 3.565195] hub 6-0:1.0: USB hub found
    [ 3.565198] hub 6-0:1.0: 2 ports detected
    [ 3.565303] uhci_hcd 0000:00:1d.1: setting latency timer to 64
    [ 3.565306] uhci_hcd 0000:00:1d.1: UHCI Host Controller
    [ 3.565310] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7
    [ 3.565329] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000f040
    [ 3.565352] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001
    [ 3.565354] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.565356] usb usb7: Product: UHCI Host Controller
    [ 3.565358] usb usb7: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
    [ 3.565359] usb usb7: SerialNumber: 0000:00:1d.1
    [ 3.565424] hub 7-0:1.0: USB hub found
    [ 3.565427] hub 7-0:1.0: 2 ports detected
    [ 3.565534] uhci_hcd 0000:00:1d.2: setting latency timer to 64
    [ 3.565537] uhci_hcd 0000:00:1d.2: UHCI Host Controller
    [ 3.565541] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8
    [ 3.565560] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000f020
    [ 3.565584] usb usb8: New USB device found, idVendor=1d6b, idProduct=0001
    [ 3.565587] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [ 3.565588] usb usb8: Product: UHCI Host Controller
    [ 3.565590] usb usb8: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
    [ 3.565592] usb usb8: SerialNumber: 0000:00:1d.2
    [ 3.565658] hub 8-0:1.0: USB hub found
    [ 3.565661] hub 8-0:1.0: 2 ports detected
    ...
    [ 4.120014] usb 2-3: new high-speed USB device number 2 using ehci-pci
    ...
    [ 4.468908] usb 2-3: New USB device found, idVendor=046d, idProduct=0825
    [ 4.468912] usb 2-3: New USB device strings: Mfr=0, Product=0, SerialNumber=2
    [ 4.468914] usb 2-3: SerialNumber: AF582E10
    ...
    [ 5.284019] usb 5-2: new full-speed USB device number 2 using uhci_hcd
    [ 5.465903] usb 5-2: New USB device found, idVendor=046d, idProduct=0b04
    [ 5.465908] usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    [ 5.465911] usb 5-2: Product: Logitech BT Mini-Receiver
    [ 5.465914] usb 5-2: Manufacturer: Logitech
    [ 5.468948] hub 5-2:1.0: USB hub found
    [ 5.470898] hub 5-2:1.0: 3 ports detected
    [ 5.476096] Switched to clocksource tsc
    [ 5.712099] usb 7-2: new full-speed USB device number 2 using uhci_hcd
    [ 5.896366] usb 7-2: New USB device found, idVendor=046d, idProduct=c52b
    [ 5.896370] usb 7-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    [ 5.896372] usb 7-2: Product: USB Receiver
    [ 5.896374] usb 7-2: Manufacturer: Logitech
    [ 6.140016] usb 8-1: new full-speed USB device number 2 using uhci_hcd
    [ 6.324597] usb 8-1: New USB device found, idVendor=0738, idProduct=1708
    [ 6.324603] usb 8-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    [ 6.324605] usb 8-1: Product: Mad Catz R.A.T.7 Mouse
    [ 6.324608] usb 8-1: Manufacturer: Mad Catz
    [ 6.564012] usb 8-2: new low-speed USB device number 3 using uhci_hcd
    [ 6.746602] usb 8-2: New USB device found, idVendor=1d57, idProduct=0010
    [ 6.746608] usb 8-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    [ 6.746610] usb 8-2: Product: usb mouse with wheel
    [ 6.746613] usb 8-2: Manufacturer: HID-Compliant Mouse
    [ 7.337898] usb 5-2.2: new full-speed USB device number 3 using uhci_hcd
    [ 7.490902] usb 5-2.2: New USB device found, idVendor=046d, idProduct=c713
    [ 7.490907] usb 5-2.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [ 7.490910] usb 5-2.2: Product: Logitech BT Mini-Receiver
    [ 7.490913] usb 5-2.2: Manufacturer: Logitech
    [ 7.490915] usb 5-2.2: SerialNumber: 001F203BD6A7
    [ 7.569898] usb 5-2.3: new full-speed USB device number 4 using uhci_hcd
    [ 7.722901] usb 5-2.3: New USB device found, idVendor=046d, idProduct=c714
    [ 7.722906] usb 5-2.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [ 7.722909] usb 5-2.3: Product: Logitech BT Mini-Receiver
    [ 7.722911] usb 5-2.3: Manufacturer: Logitech
    [ 7.722913] usb 5-2.3: SerialNumber: 001F203BD6A7

    lsusb (more output):

    Bus 002 Device 002: ID 046d:0825 Logitech, Inc. Webcam C270
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 008 Device 003: ID 1d57:0010 Xenta
    Bus 008 Device 002: ID 0738:1708 Mad Catz, Inc.
    Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 004: ID 046d:c714 Logitech, Inc. diNovo Edge Keyboard
    Bus 005 Device 003: ID 046d:c713 Logitech, Inc.
    Bus 005 Device 002: ID 046d:0b04 Logitech, Inc.
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    More background: before this, I had a problem with logging in to GNOME, during which I upgraded all the packages at one point (apt-get upgrade) and the system stopped booting at all (it didn't get to the login screen). Then I fixed a PATH issue, and now I've got this usb-not-working issue. I tried reinstalling the kernel, to no effect. Is there anything else I can do to fix or diagnose the problem?

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11, which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhale a Solaris Container from Solaris 10, or an entire Solaris 10 system ("the global zone"), into a virtualized environment on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones on Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system.

To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page.

Install prerequisites
The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand.

    # pkg install pkg:/system/zones/brand/brand-solaris10
               Packages to install:  1
           Create boot environment: No
    Create backup boot environment: Yes

    DOWNLOAD        PKGS   FILES   XFER (MB)
    Completed        1/1   44/44     0.4/0.4

    PHASE                        ACTIONS
    Install Phase                  74/74

    PHASE                          ITEMS
    Package State Update Phase       1/1
    Image State Update Phase         2/2

That took only a few minutes, and didn't require a reboot.

Install the Solaris 10 zone
Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (contrary to what the README states) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration:

  • -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter.
  • -z zonename - the name of the zone you would like to create.
  • -i interface - the package will create an exclusive-IP zone using a virtual NIC (VNIC) based on this physical interface. In my case, I have a NIC called rge0.
  • -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10.

Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes.

    # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10
    ...
    ...
    Checking disk-space for extraction
    Ok
    Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ...
    100% [===============================]
    Checking data integrity
    Ok
    Checking platform compatibility
    The host and the image do not have the same Solaris release:
        host Solaris release: 5.11
        image Solaris release: 5.10
    Will create a Solaris 10 branded zone.
    Warning: could not find a defaultrouter
    Zone won't have any defaultrouter configured

    IMAGE:      ./solaris-10u10-x86.bin
    ZONE:       s10u10
    ZONEPATH:   /zones/s10u10
    INTERFACE:  rge0
    VNIC:       vnicZBI13379
    MAC ADDR:   2:8:20:5c:1a:cc
    IP ADDR:    192.168.1.100
    NETMASK:    255.255.255.0
    DEFROUTER:  NONE
    TIMEZONE:   US/Arizona

    Checking disk-space for installation
    Ok
    Installing in /zones/s10u10 ...
    100% [===============================]
    Using a static exclusive-IP
    Attaching s10u10
    Booting s10u10
    Waiting for boot to complete
    booting...
    booting...
    booting...
    Zone s10u10 booted

    The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'.

The nifty part in my opinion (besides being so easy) is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment.

Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use old-school ifconfig, and then I'll use the new ipadm and dladm commands.

    # ifconfig -a4
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2
            inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255
            ether 0:14:d1:18:ac:bc
    vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
            inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255
            ether 8:0:27:f8:62:1c

    # dladm show-phys
    LINK         MEDIA      STATE    SPEED  DUPLEX   DEVICE
    yge0         Ethernet   unknown  0      unknown  yge0
    yge1         Ethernet   unknown  0      unknown  yge1
    rge0         Ethernet   up       1000   full     rge0
    vboxnet0     Ethernet   up       1000   full     vboxnet0

    # dladm show-link
    LINK                 CLASS  MTU   STATE    OVER
    yge0                 phys   1500  unknown  --
    yge1                 phys   1500  unknown  --
    rge0                 phys   1500  up       --
    vboxnet0             phys   1500  up       --
    vnicZBI13379         vnic   1500  up       rge0
    s10u10/vnicZBI13379  vnic   1500  up       rge0
    s10u10/net0          vnic   1500  up       rge0

    # dladm show-vnic
    LINK                 OVER  SPEED  MACADDRESS       MACADDRTYPE  VID
    vnicZBI13379         rge0  1000   2:8:20:5c:1a:cc  random       0
    s10u10/vnicZBI13379  rge0  1000   2:8:20:5c:1a:cc  random       0
    s10u10/net0          rge0  1000   2:8:20:9d:d0:79  random       0

    # ipadm show-addr
    ADDROBJ      TYPE    STATE  ADDR
    lo0/v4       static  ok     127.0.0.1/8
    rge0/_a      dhcp    ok     192.168.1.3/24
    vboxnet0/_a  static  ok     192.168.56.1/24
    lo0/v6       static  ok     ::1/128

Log into the zone
The install step already booted the zone, so let's log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can log in to which zones. Voilà! A Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and the output from a ping to a nearby host.

    $ zlogin s10u10
    zlogin: You lack sufficient privilege to run this command (all privs required)
    savit@home:~$ sudo zlogin s10u10
    Password:
    [Connected to zone 's10u10' pts/5]
    Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
    # uname -a
    SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc
    # ifconfig -a4
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    vnicZBI13379: flags=1000843 mtu 1500 index 2
            inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
            ether 2:8:20:5c:1a:cc
    # bash
    bash-3.2# ifconfig -a
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    vnicZBI13379: flags=1000843 mtu 1500 index 2
            inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
            ether 2:8:20:5c:1a:cc
    bash-3.2# ping 192.168.1.2
    192.168.1.2 is alive

For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - it took just a few minutes.

    bash-3.2# svcs apache2
    STATE          STIME    FMRI
    disabled       12:38:46 svc:/network/http:apache2
    bash-3.2# svcadm enable apache2

Summary
In just a few minutes, I built a functioning virtual Solaris 10 environment under my Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.
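
Editor's addendum (not part of the original walkthrough): the standard Solaris 11 zone tooling gives a quick way to double-check the result from the global zone; zoneadm reports each zone's ID, name, status, path, brand and IP type.

    # List all zones (configured and running) with their state and brand
    zoneadm list -cv

    # Attach to the branded zone's console if you need to watch it boot
    zlogin -C s10u10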

    Read the article

  • Sending Messages to SignalR Hubs from the Outside

    - by Ricardo Peres
    Introduction
You are by now probably familiar with SignalR, Microsoft's API for real-time web functionality. This is, in my opinion, one of the greatest products Microsoft has released in recent times. Usually, people log in to a site and enter some page which is connected to a SignalR hub. Then they can send and receive messages – not just text messages, mind you – to other users in the same hub. The server can also take the initiative to send messages to all or a specified subset of users on its own; this is known as server push. The normal flow is pretty straightforward. Microsoft has done a great job with the API: it's clean and quite simple to use. And the latter – the server taking the initiative – is also quite simple; it just involves a little more work.

The Problem
Messages are normally sent from inside a hub – an instance of the Hub class – which is something that we don't have if we are the server and we want to send a message to some user or group of users: a Hub instance is only instantiated in response to a client message.

The Solution
It is possible to acquire a hub's context from outside of an actual Hub instance, by calling GlobalHost.ConnectionManager.GetHubContext<T>(). This API allows us to:

  • Broadcast messages to all connected clients (possibly excluding some);
  • Send messages to a specific client;
  • Send messages to a group of clients.

So, we have groups and clients, and each is identified by a string. Client strings are called connection ids, and group names are free-form, given by us. The problem with client strings is that we do not know how they map to actual users. One way to achieve this mapping is by overriding the Hub's OnConnected and OnDisconnected methods and managing the association there. Here's an example:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class MyHub : Hub
    {
        private static readonly IDictionary<String, ISet<String>> users = new ConcurrentDictionary<String, ISet<String>>();

        public static IEnumerable<String> GetUserConnections(String username)
        {
            ISet<String> connections;

            users.TryGetValue(username, out connections);

            return (connections ?? Enumerable.Empty<String>());
        }

        private static void AddUser(String username, String connectionId)
        {
            ISet<String> connections;

            if (users.TryGetValue(username, out connections) == false)
            {
                connections = users[username] = new HashSet<String>();
            }

            connections.Add(connectionId);
        }

        private static void RemoveUser(String username, String connectionId)
        {
            users[username].Remove(connectionId);
        }

        public override Task OnConnected()
        {
            AddUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
            return (base.OnConnected());
        }

        public override Task OnDisconnected()
        {
            RemoveUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
            return (base.OnDisconnected());
        }
    }

As you can see, I am using a static field to store the mapping between a user and its possibly many connections – for example, multiple open browser tabs or even multiple browsers accessing the same page with the same login credentials. The user identity, as is normal in .NET, is obtained from the IPrincipal, which in the case of SignalR hubs is stored in Context.Request.User. Of course, this property will only have a meaningful value if we enforce authentication.

Another way to go is by creating a group for each user that connects:

    public class MyHub : Hub
    {
        public override Task OnConnected()
        {
            this.Groups.Add(this.Context.ConnectionId, this.Context.Request.User.Identity.Name);
            return (base.OnConnected());
        }

        public override Task OnDisconnected()
        {
            this.Groups.Remove(this.Context.ConnectionId, this.Context.Request.User.Identity.Name);
            return (base.OnDisconnected());
        }
    }

In this case, we will have a one-to-one equivalence between users and groups. All connections belonging to the same user will fall in the same group. So, if we want to send messages to a user from outside an instance of the Hub class, we can do something like this, for the first option – user mappings stored in a static field:

    public void SendUserMessage(String username, String message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

        foreach (String connectionId in MyHub.GetUserConnections(username))
        {
            context.Clients.Client(connectionId).sendUserMessage(message);
        }
    }

And for using groups, it's even simpler:

    public void SendUserMessage(String username, String message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

        context.Clients.Group(username).sendUserMessage(message);
    }

Using groups has the advantage that the IHubContext interface returned from GetHubContext has direct support for groups, so there is no need to send messages to individual connections. Of course, you can wrap both mapping options in a common API, perhaps exposed through IoC. One example of its interface might be:

    public interface IUserToConnectionMappingService
    {
        // associate and dissociate connections to users

        void AddUserConnection(String username, String connectionId);

        void RemoveUserConnection(String username, String connectionId);
    }

SignalR has built-in dependency resolution, by means of the static GlobalHost.DependencyResolver property:

    // for using groups (in the Global class)
    GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new GroupsMappingService());

    // for using a static field (in the Global class)
    GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new StaticMappingService());

    // retrieving the current service (in the Hub class)
    var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>();

Now all you have to do is implement GroupsMappingService and StaticMappingService with the code I showed here and change the SendUserMessage method to rely on the dependency resolver for the actual implementation. Stay tuned for more SignalR posts!
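
Editor's addendum: to round out the group-based option, here is a minimal sketch of how SendUserMessage might be hosted in practice. The NotificationsController class and its action are hypothetical, not from the original post; only the GlobalHost.ConnectionManager.GetHubContext call and the group API are SignalR's own.

    using System.Web.Mvc;
    using Microsoft.AspNet.SignalR;

    // Hypothetical host for the snippet above: an ASP.NET MVC controller
    // that pushes a message to all of a user's connections via the group API.
    public class NotificationsController : Controller
    {
        public ActionResult Notify(string username, string message)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

            // "sendUserMessage" must match the callback name registered on the client.
            context.Clients.Group(username).sendUserMessage(message);

            return (new EmptyResult());
        }
    }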

    Read the article

  • IIS 7's Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx

If you're like me, you don't really do a lot with COM components these days. For me, I've been 'lucky' to stay in the managed world for the past 6 or 7 years. Until last week. I'm running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP and lots of tables, in-line styles and a bunch of other late 90's and early 2000's goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. So I built some COM-InterOp DLLs (easily through VS2012's Add Reference feature... nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec. There's a document management system I'm thinking of whose COM DLLs were not proper COM DLLs and crashed and burned every time .NET tried to call them through a COM-InterOp layer.

Anyway, my test app worked like a champ and I felt confident that I could build a nice façade around the COM DLLs and wrap some functionality internally and only expose to my users/clients what they really needed. So I did this, built some tests and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn't much overhead involved in going through the COM-InterOp layer.

You know where this is going, don't you?

So I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message:

    Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error:
    80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch!

Then I came across this question on Experts Exchange, and at the bottom of the page, someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah, my machine is 64-bit; these COM DLLs I'm using are old and are definitely 32-bit. I didn't check for that and didn't even think about that. But I went ahead and looked at the app pool that my web site was running under and what did I see? Yep, select your app pool in IIS 7.x, click on Advanced Settings and check for "Enable 32-bit Applications". I went ahead and set it to True and my test application suddenly worked.

Hope this saves somebody out there from pulling out their hair.
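
Editor's addendum: the same switch can be flipped from the command line with IIS's standard appcmd tool; a quick sketch (substitute your own application pool name for "MyAppPool"):

    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true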

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smartphones are a common sight. I'll never forget: when I was in India in 2011, I was up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out-of-the-way and not-so-touristy village. So we're slowly trundling along in the forest and he's lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting, a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it's reached into even very buried and unexpected parts of this world.

Apps are still King
Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there's something that you need on your mobile device, your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There's also PhoneGap/Cordova, which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized app container that can run on most mobile device platforms using a WebView interface. This allows the use of HTML technology, but it also still requires all the setup: configuration of APIs, security keys, certification, and the same submission and deployment process as native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren't geared at the mainstream market and that don't fit the 'store' model. If you're building a small intra-department application, you don't want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit, both from the development and the deployment perspective.
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance, to a single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways, Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different…

Where's the Love for the Mobile Web?
Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don't really have a solid mobile Web platform. I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces". I agree to a point – it's actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features, and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile-first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective, things actually look very good.

Not just about the UI
But it's not just about the UI - it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this.

Slow Standards Adoption
HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but what is lacking is that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile-specific HTML APIs. A host of mobile standards exist, but many of the standards are in the early review stage and they have been stuck there for long periods of time and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken-and-egg scenario, but without some forward momentum from some party, we end up stuck in the mud.
It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough.

Mobile Device Integration just isn't good enough
Current standards are not far-reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system offers benefits that are unique to mobile applications and could be widely used, but these are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app-centric model with its own custom API accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only implemented partially by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser-specific code, but that's mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it's not quite working out that way. It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, the Messaging API, and the Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It's extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. The remaining 10% have to do with platform integration, browser differences and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL moniker interface that sort of works on Android, works badly with iOS (only works if the address is already in the contact list) and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface – after all, all phones have supported SMS since before the year 2000!

But, it doesn't have to be this way
Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers today. It handles security concerns via prompts to avoid unwanted access, which is a model that would work for most other device APIs in a similar fashion.
One-time approval and occasional re-approval if code changes or caches expire. Simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there.

Inadequate Web Application Linking and Activation
Another important missing piece of the puzzle is the integration of HTML based Web applications. Today, HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content, there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently, and permanent linking is not so critical for this type of content. But applications have different needs. Applications need to start up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately, this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, through 'installing' them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a mobile Web 'app' and making it sticky is by no means as easy or satisfying as it is for native apps. Today, the way you'd go about this is:

  1. Open the browser
  2. Search for a Web site in the browser with your search engine of choice
  3. Hope that you find the right site
  4. Hope that you actually find a site that works for your mobile device
  5. Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area)
  6. Pin the app to the home screen (with all the limitations outlined above)
  7. Hope you pointed at the right URL when you pinned

Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First, figuring out how to search for a specific site or URL? And then pinning the app, and hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks. For developers, too, this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently. And that's not the end of it - getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent, and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires you to capture an actual screen, or rather a partial screen, for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
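
To make that point concrete, the de facto home screen markup in question boils down to a handful of Apple-proprietary tags like the following (a sketch; the icon path is hypothetical):

    <!-- De facto (Apple-proprietary) home screen hints, also honored by some Android browsers -->
    <link rel="apple-touch-icon" sizes="114x114" href="/touch-icon-114.png" />
    <meta name="apple-mobile-web-app-capable" content="yes" />
    <meta name="apple-mobile-web-app-status-bar-style" content="black" />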
Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple-specific meta tags that other browsers choose to support. The question is: why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one?

Then there's iOS and the crazy way it treats home screen linked URLs, using a crazy hybrid format that is neither as capable as a Web app running in Safari nor as a WebView hosted application. Moving off the Web 'app' when switching to another app actually causes the browser to 'blank out' the Web application's preview in the Task View (see screenshot on the right). Then, when the 'app' is reactivated, it ends up completely restarting the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that's not a possibility. The only reason for this screwed-up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers.

Where's the Money?
It's not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from app store sales and have little to gain from Web based applications that bypass the app store and the cash cow that it presents. On top of that, platform-specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it's no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more, we're seeing operations shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

Do we need a Mobile Web App Store?
As much as I dislike moderated experiences in today's massive app stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry-like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry and then also handle installation, in the form of providing the home screen linking, plus maybe an initial security configuration that determines what features the app is allowed to access.
I’m not holding my breath. For this sort of thing to take off and gain widespread appeal, a lot of coordination would be required, and to get enough traction it would have to come from a well-known entity – a mobile Web app store from a no-name source is unlikely to gain high enough usage numbers to make a difference. In a way this would take away some of the freedom of the Web, but of course it would also just be an optional search path in addition to the standard open Web search mechanisms we use to find and access content today.

Security

Security is a big deal, and one of the reasons so many IT professionals appear willing to go back to the walled garden of deployed apps is that store apps are perceived as safe thanks to the official review and curation of the App Stores. Curated stores are supposed to protect you from malware and from illegal or misleading content. It doesn’t always work out that way, though – all the major vendors have had issues with security and the review process at one time or another.

Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run entirely inside the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device-specific features. And as discussed earlier, security for device interaction can be granted to Web applications the same way it is to native applications: either via explicit policies loaded from the Web, or via prompting, as Geolocation does today (see the sketch below). Security is important, but it’s certainly a solvable problem for Web applications, even those that need to access device hardware. It shouldn’t be a reason for Web apps not to be an equal player in mobile applications.
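As a concrete illustration of that prompting model, here’s what the Geolocation API – one of the few device APIs browsers already expose – looks like from the Web side. The API calls are standard; the handlers and option values below are placeholders:

    // The browser – not the application – prompts the user for
    // permission the first time a site calls this.
    navigator.geolocation.getCurrentPosition(
        function (pos) {
            // hypothetical success handler
            console.log("lat: " + pos.coords.latitude +
                        " lng: " + pos.coords.longitude);
        },
        function (err) {
            // user declined, or the position lookup failed
            console.log("geolocation unavailable: " + err.message);
        },
        { enableHighAccuracy: false, timeout: 10000 }
    );

Extending this same per-feature, prompt-based model to cameras, contacts or file storage would solve most of the device access problem without an app store in the middle.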
Apps are winning, but haven’t we been here before?

So now we find ourselves back in an era of installed apps rather than Web-based and centrally managed apps. Only it’s even worse today than it was with desktop applications, in that the apps now go through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly, it’s a mystery to me why anybody would buy into this model, and why it has lasted this long when we’ve already been through this process once before. It’s crazy…

It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we’re basically held back by what seems little more than bureaucracy, partisan bickering and the self-interest of the major parties involved. Back in the desktop days it was Internet Explorer’s 98+% market share holding the Web back from improvements for many years – now it’s the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps – with simple ways to install and run them consistently and persistently – that would go a long way to making mobile Web applications much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps are at a severe disadvantage in placement and operation.

There are a few bright spots in all of this. Mozilla’s Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content. It supports both packaged and certified app modes (apps that can be put into the app store) and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest (a sketch follows below). Open Web apps are treated as first-class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately, Firefox OS is off to a slow start, with minimal device support and a specific focus on the low-end market. We can hope that this approach catches on with other vendors, but that’s an uphill battle given the conflict of interest with platform lock-in that it represents.
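For reference, a Firefox OS app manifest is just a JSON file served alongside the app. The field names below follow Mozilla’s Open Web App manifest format; the values are all hypothetical, and the offline caching itself runs through the standard HTML5 appcache rather than this file:

    {
      "name": "MyWebApp",
      "description": "A hypothetical mobile Web application",
      "launch_path": "/index.html",
      "icons": { "128": "/img/icon-128.png" },
      "developer": { "name": "Example Dev", "url": "http://example.com" },
      "permissions": {
        "geolocation": { "description": "Used to show nearby items" }
      }
    }

An Open Web app can then be installed straight from a Web page with a single call – navigator.mozApps.install("http://example.com/manifest.webapp") – which is exactly the kind of first-class, store-optional install experience the other platforms lack.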
Recent versions of Android also seem to handle mobile Web application integration onto the desktop and activation reasonably well out of the box. Although Android still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons go to the desktop on pinning, activation is WebView-based and full screen, and application persistence is reliable because the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support.

What’s also interesting to me is that Microsoft hasn’t picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft has nothing to lose and everything to gain by pursuing fresh ideas and expanding into areas the other major vendors are neglecting. Instead, Microsoft is trying to beat the market leaders at their own game, fighting on its adversaries’ terms rather than taking a new tack. Providing a kick-ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive Microsoft could do to improve its miserable position in the mobile device market.

Where are we at with Mobile Web?

It sure sounds like I’m really down on the mobile Web, right? I’ve built a number of mobile apps in the last year, and while the overall result and response to what we were able to accomplish in terms of UI have been very positive, getting the final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made, and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you’re not integrating with device features and you don’t care deeply about smooth integration with the mobile desktop, mobile Web development is fraught with frustration.

So, yes, I’m frustrated! But it’s not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn’t be difficult for device platform vendors to make Web-based applications first-class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens.

So, what’s your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.

Resources

Rick Strahl on DotNet Rocks talking about Mobile Web

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in HTML5, Mobile


  • 10 Excellent Icon Sets

    - by Jyoti
    Icons are really useful for web design, application interfaces and more. Everyone loves good-looking icons. In this post you will find 10 fresh new icon packs that you can use for your projects: Pixelpress Mixed Icons, Social Media Icons by Studio M6, Now Wooden App Icon, Onebit Icon Pack, Fresh Add On Icon Set, 3D [...]

