Search Results

Search found 12753 results on 511 pages for 'small'.


  • Shrinking TCP Window Size to 0 on Cisco ASA

    - by Brent
    We are having an issue where any large file transfer that crosses our Cisco ASA unit comes to an eventual pause.

    Setup

    Test 1: Server A, FileZilla Client <- 1 Gbps - Cisco ASA <- 1 Gbps - Server B, FileZilla Server. The TCP window size on large transfers drops to 0 after around 30 seconds of a large file transfer. The RDP session then becomes unresponsive for a minute or two and is sporadic after that. After a minute or two the FTP transfer resumes, but at 1-2 MB/s. When the FTP transfer is over, the responsiveness of the RDP session returns to normal.

    Test 2: Server C, FileZilla Client, in the same network as Server B <- local network - Server B, FileZilla Server. The file transfers at 30+ MB/s.

    Details

    ASA: 5520 running 8.3(1) with ASDM 6.3(1)
    Windows: Server 2003 R2 SP2 with latest patches
    Server: VMs running on HP C3000 blade chassis
    FileZilla: 3.3.5.1, latest stable build
    Transfer: 20 GB SQL .BAK file
    Protocol: Active FTP over tcp/20, tcp/21
    Switches: Cisco Small Business 2048 Gigabit running latest 2.0.0.8
    VMware: 4.1
    HP: Flex-10 3.15, latest version

    Notes

    All servers are VMs.

    Thoughts

    Pretty sure the ASA is at fault, since a transfer between VMs on the same network does not show a shrinking window size. Our ASA is pretty vanilla: no major changes have been made to any of the settings, just a bunch of NAT and ACLs.

    Wireshark Sample

        No.     Time       Source   Destination  Protocol  Info
        234905  73.916986  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131981791 Win=65535 Len=0
        234906  73.917220  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234907  73.917224  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234908  73.917231  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131984551 Win=64155 Len=0
        234909  73.917463  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234910  73.917467  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234911  73.917469  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234912  73.917476  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131988691 Win=60015 Len=0
        234913  73.917706  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234914  73.917710  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234915  73.917715  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131991451 Win=57255 Len=0
        234916  73.917949  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234917  73.917953  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234918  73.917958  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131994211 Win=54495 Len=0
        234919  73.918193  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234920  73.918197  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234921  73.918202  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131996971 Win=51735 Len=0
        234922  73.918435  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234923  73.918440  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234924  73.918445  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=131999731 Win=48975 Len=0
        234925  73.918679  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234926  73.918684  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234927  73.918689  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132002491 Win=46215 Len=0
        234928  73.918922  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234929  73.918927  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234930  73.918932  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132005251 Win=43455 Len=0
        234931  73.919165  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234932  73.919169  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234933  73.919174  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132008011 Win=40695 Len=0
        234934  73.919408  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234935  73.919413  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234936  73.919418  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132010771 Win=37935 Len=0
        234937  73.919652  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234938  73.919656  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234939  73.919661  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132013531 Win=35175 Len=0
        234940  73.919895  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234941  73.919899  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234942  73.919904  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132016291 Win=32415 Len=0
        234943  73.920138  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234944  73.920142  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234945  73.920147  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132019051 Win=29655 Len=0
        234946  73.920381  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234947  73.920386  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234948  73.920391  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132021811 Win=26895 Len=0
        234949  73.920625  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234950  73.920629  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234951  73.920632  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234952  73.920638  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132025951 Win=22755 Len=0
        234953  73.920868  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234954  73.920871  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234955  73.920876  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132028711 Win=19995 Len=0
        234956  73.921111  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234957  73.921115  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234958  73.921120  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132031471 Win=17235 Len=0
        234959  73.921356  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234960  73.921362  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234961  73.921370  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132034231 Win=14475 Len=0
        234962  73.921598  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234963  73.921606  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234964  73.921613  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132036991 Win=11715 Len=0
        234965  73.921841  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234966  73.921848  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234967  73.921855  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132039751 Win=8955 Len=0
        234968  73.922085  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234969  73.922092  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234970  73.922099  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132042511 Win=6195 Len=0
        234971  73.922328  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234972  73.922335  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234973  73.922342  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132045271 Win=3435 Len=0
        234974  73.922571  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234975  73.922579  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 1380 bytes
        234976  73.922586  1.1.1.1  2.2.2.2  TCP       ftp-data ivecon-port [ACK] Seq=1 Ack=132048031 Win=675 Len=0
        234981  75.866453  2.2.2.2  1.1.1.1  FTP-DATA  FTP Data: 675 bytes
        234985  76.020168  1.1.1.1  2.2.2.2  TCP       [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0
        234989  76.771633  2.2.2.2  1.1.1.1  TCP       [TCP ZeroWindowProbe] ivecon-port ftp-data [ACK] Seq=132048706 Ack=1 Win=65535 Len=1
        234990  76.771648  1.1.1.1  2.2.2.2  TCP       [TCP ZeroWindowProbeAck] [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0
        234997  78.279701  2.2.2.2  1.1.1.1  TCP       [TCP ZeroWindowProbe] ivecon-port ftp-data [ACK] Seq=132048706 Ack=1 Win=65535 Len=1
        234998  78.279714  1.1.1.1  2.2.2.2  TCP       [TCP ZeroWindowProbeAck] [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0


  • OpenVPN not connecting

    - by LandArch
    There have been a number of post similar to this, but none seem to satisfy my need. Plus I am a Ubuntu newbie. I followed this tutorial to completely set up OpenVPN on Ubuntu 12.04 server. Here is my server.conf file ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) local 192.168.13.8 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. port 1194 # TCP or UDP server? proto tcp ;proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "/etc/openvpn/ca.crt" cert "/etc/openvpn/server.crt" key "/etc/openvpn/server.key" # This file should be kept secret # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh dh1024.pem # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. ;server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. 
If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. server-bridge 192.168.13.101 255.255.255.0 192.168.13.105 192.168.13.200 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. push "route 192.168.13.1 255.255.255.0" push "dhcp-option DNS 192.168.13.201" push "dhcp-option DOMAIN blahblah.dyndns-wiki.com" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). # EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). 
;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow different # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. # If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. user nobody group nogroup # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I am using Windows 7 as the Client and set that up accordingly using the OpenVPN GUI. 
That conf file is as follows: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. blahblah.dyndns-wiki.com 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) user nobody group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\OpenVPN\config\\ca.crt" cert "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.crt" key "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Not sure whats left to do.


  • SCCM 2012 unable to update boot images with PXE enabled

    - by Adam
    we are fighting an error in sccm 2012. When we attempt to distribute boot images (after selecting the pxe option) we receive an error that the pxe image cannot be expanded (distmgr log). Can you give us any direction on what to try or attempt in this scenario? We only have one dp in our environment at the moment, however we have found that by creating another dp on a different server we don’t have this problem. However we really need the primary site to be a dp. We have tried: Removing and reinstalling the dp Removing and reinstalling the WDS Reinstalled the OS ... ouch Reinstalled SQL We even attempted to manually mount these wims in the remote install folder, no luck... And we have been working on this for days... Any and all help is appreciated! Our log is below Thank you very much, Small Town IT guy Attempting to add or update a package on a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) The distribution point is on the siteserver and the package is a content type package. There is nothing to be copied over. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) STATMSG: ID=2342 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:41.559 2012 ISTR0="Boot image (x86)" ISTR1="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="IVC00001" AID1=404 AVAL1="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) The current user context will be used for connecting to ["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) No network connection is needed to ["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\ as this is the local machine. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Signature share exists on distribution point path \OURSERVER.ourdomain.cc.ia.us\SMSSIG$ SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Ignoring drive C:. File C:\NO_SMS_ON_DRIVE.SMS exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Share SMSPKGD$ exists on distribution point \OURSERVER.ourdomain.cc.ia.us\SMSPKGD$ SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Creating, reading and or updating Operations Management server role registry keys for a Distribution Point ... SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Creating, reading or updating IIS registry key for a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISPortsList in the SCF is "80". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISSSLPortsList in the SCF is "443". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISWebSiteName in the SCF is "". 
SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISSSLState in the SCF is 224. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Virtual Directory SMS_DP_SMSPKG$ for the physical path F:\SCCMContentLib already exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) STATMSG: ID=2375 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:42.058 2012 ISTR0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=404 AVAL0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Creating, reading or updating IIS registry key for a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISPortsList in the SCF is "80". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISSSLPortsList in the SCF is "443". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISWebSiteName in the SCF is "". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISSSLState in the SCF is 224. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Virtual Directory SMS_DP_SMSSIG$ for the physical path D:\SMSSIG$ already exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) STATMSG: ID=2375 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:42.105 2012 ISTR0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=404 AVAL0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) RDC:Successfully created package signature file from \?\F:\SMSPKGSIG\IVC00001.3 to \OURSERVER.ourdomain.cc.ia.us\SMSSIG$\IVC00001.3.tar SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Setting permissions on file MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\SMSSIG$\IVC00001.3.tar. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) ExpandPXEImage: IVC00001, 1024 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) CContentDefinition::GetFileProperties failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) CContentDefinition::TotalFileSizes failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) ExpandPXEImage failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Error occurred. Performing error cleanup prior to returning. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) DP thread with array index 0 ended. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) DP thread with thread handle 00000000000013A4 and thread ID 6924 ended. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Updating package info for package IVC00001 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Package IVC00001 does not have a preferred sender. 
SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) The package and/or program properties for package IVC00001 have not changed, need to determine which site(s) need updated package info. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) StoredPkgVersion (3) of package IVC00001. StoredPkgVersion in database is 3. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) SourceVersion (3) of package IVC00001. SourceVersion in database is 3. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) STATMSG: ID=2302 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=4492 GMTDATE=Fri Jun 22 19:49:42.292 2012 ISTR0="Boot image (x86)" ISTR1="IVC00001" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="IVC00001" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Failed to process package IVC00001 after 0 retries, will retry 100 more times SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Exiting package processing thread. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C)


  • Jolicloud is a Nifty New OS for Your Netbook

    - by Matthew Guay
    Want to breathe new life into your netbook?  Here’s a quick look at Jolicloud, a unique new Linux based OS that lets you use your netbook in a whole new way. Netbooks have been an interesting category of computers.  When they were first released, most netbooks came with a stripped down Linux based operating system designed to let you easily access the internet first and foremost.  Consumers wanted more from their netbooks, so full OSes such as Windows XP and Ubuntu became the standard on netbooks.  Microsoft worked hard to get Windows 7 working great on netbooks, and today most netbooks run Windows 7 great.  But the Linux community hasn’t stood still either, and Jolicloud is proof of that.  Jolicloud is a unique OS designed to bring the best of both webapps and standard programs to your netbook.   Keep reading to see if this is the perfect netbook OS for you. Getting Started Installing Jolicloud on your netbook is easy thanks to a the Jolicloud Express installer for Windows.  Since many netbooks run Windows by default, this makes it easy to install Jolicloud.  Plus, your Windows install is left untouched, so you can still easily access all your Windows files and programs. Download and run the roughly 700Mb installer (link below) just as a normal installer in Windows. This will first extract the needed files. Click Get started to install Jolicloud on your netbook. Enter a username, password, and nickname for your computer.  Please note that the username must be all lowercase, and the nickname should not contain spaces or special characters.   Now you can review the default installation settings.  By default it will take up 39Gb and install on your C:\ drive in English.  If you wish to change this, click Change. We chose to install it on the D: drive on this netbook, as its harddrive was already partitioned into two parts.  Click Save when your settings are all correct, and then click Next in the previous window. Jolicloud will prepare for the installation.  This took about 5 minutes in our test.  Click Next when this is finished. Click Restart now to install and run Jolicloud. When your netbook reboots, it will initialize the Jolicloud setup. It will then automatically finish the installation.  Just sit back and wait; there’s nothing for you to do right now.  The installation took about 20 minutes in our test. Jolicloud will automatically reboot when the setup is finished. Once it’s rebooted, you’re ready to go!  Enter the username, then the password, that you chose earlier when you were installing Jolicloud from Windows. Welcome to your Jolicloud desktop! Hardware Support We installed Jolicloud on a Samsung N150 netbook with an Atom N450 processor, 1Gb Ram, 250Gb harddrive, and WiFi b/g/n with Bluetooth.  Amazingly, once Jolicloud was installed, everything was ready to use.  No drivers to install, no settings to hassle with, it was all installed and set up perfectly.  Power settings worked great, and closing the netbook put it to sleep just like in Windows. WiFi drivers have typically been difficult to find and install on Linux, but Jolicloud had our netbook’s wifi working immediately.  To get online, simply click the Wireless icon on the top right, and select the wireless network you want to connect to. Jolicloud will let you know when it is signed on. Wired Lan networking was also seamless; simply connect your cable and you’re ready to go.  The webcam and touchpad also worked perfectly directly.  
The only thing missing was multitouch; this touchpad has two finger scroll, pinch zoom, and other nice multitouch features in Windows, but in Julicloud it only functioned as a standard touchpad.  It did have tap to click activated by default, as well as right-side scrolling, which is nice. Jolicloud also supported our video card without any extra work.  The native resolution was already selected, and the only problem we had with the screen was that there was no apparent way to change the brightness.  This is not a major problem, but would be nice to have.  The Samsung N150 has Intel GMA3150 integrated graphics, and Jolicloud promises 1080p HD video on it.  It did playback 720p H.264 video flawlessly without installing anything extra, but it stuttered on full 1080p HD (which is the exact same as this netbook’s video playback in Windows 7 – 720p works great, but it stutters on 1080p).  We would be excited to see full HD on this netbook, but 720p is definitely fine for most stuff.   Jolicloud supports a wide range of netbooks, and based on our experience we would expect it to work as good on any supported hardware.  Check out the list of supported netbooks to see if your netbook is supported; if not, it still may work but you may have to install special drivers. Jolicloud’s performance was very similar to Windows 7 on our netbook.  It boots in about 30 seconds, and apps load fairly quickly.  In general, we couldn’t tell much difference in performance between Jolicloud and Windows 7, though this isn’t a problem since Windows 7 runs great on the current generation of netbooks. Using Jolicloud Ready to start putting Jolicloud to use?  Your fresh Jolicloud install you can run several built-in apps, such as Firefox, a calculator, and the chat client Pidgin.  It also has a media player and file viewer installed, so you can play MP3s or MPG videos, or read PDF ebooks without installing anything extra.  It also has Flash player installed so you can watch videos online easily. You can also directly access all of your files from the right side of your home screen.  You can even access your Windows files; in our test, the 116.9 GB Media was C: from Windows.  Select it to browse and open any file you had saved in Windows. You may need to enter your password to access it. Once you’re authenticated it, you’ll see all of your Windows files and folders.  Your User files (Documents, Music, Videos, etc.) will be in the Users folder. And, you can easily add files from removable media such as USB flash drives and memory cards.  Jolicloud recognized a flash drive we tested with no trouble at all. Add new apps But, the best part about Jolicloud is that it makes it very easy to install new apps.  Click the Get Started button on your homescreen. You’ll first need to create an account.  You can then use this same account on another netbook if you wish, and your settings will automatically be synced between the two. You can either signup using your Facebook account, …or you can sign up the traditional way with your email address, name, and password.  If you sign up this way, you will need to confirm your email address before your account will be finished. Now, choose your netbook model from the list, and enter a name for your computer. And that’s it!  You’ll now see the Jolicloud dashboard, which will show you updates and notifications from friends who also use Jolicloud. Click the App directory to find new apps for your netbook.  
Here you will find a variety of webapps, such as Gmail, along with native applications, such as Skype, that you can install on your netbook.  Simply click the Install button on the right to add the app to your netbook. You will be prompted to enter your system password, and then the app will install without any further input.   Once an app is installed, a check mark will appear beside its name.  You can remove it by clicking the Remove button, and it will uninstall seamlessly. Webapps, such as Gmail, actually run in in a Chrome-powered window that lets the webapp run full screen.  This gives the webapps a native feel, but actually they’re just running the same as they would in a standard web browser.   The Jolicloud Interface Most apps run maximized, and there is no way to run them smaller.  This in general works good, since with small screens most apps need to run full-screen anyhow. Smaller apps, such as a calculator or the Pidgin chat client, run in a window just like they do on other operating systems. You can switch to another app that’s running by selecting it’s icon on the top left, or you can go back to the home screen by clicking the home screen.  If you’re finished with an program, simply click the red X button on the top right of the window when you’re running it. Or, you can switch between programs using standard keyboard shortcuts such as Alt-tab. The default page on the home screen is the favorites page, and all of your other programs are orginized in their own sections on the left hand side.  But, if you want to add one of these to your favorites page, simply right-click on it and select Add to Favorites. When you’re done for the day, you can simply close your netbook to put it to sleep.  Or, if you want to shut down, just press the Quit button on the bottom right of the home screen and then select Shut Down. Booting Jolicloud When you install Jolicloud, it will set itself as the default operating system.  Now, when you boot your netbook, it will show you a list of installed operating systems.  You can select either Windows or Jolicloud, but if you don’t make a selection it will boot into Jolicloud after waiting 10 seconds. If you’d perfer to boot into Windows by default, you can easily change this.  First, boot your netbook in to Windows.  Open the start menu, right-click on the Computer button, and select Properties.   Click the “Advanced system settings” link on the left side. Click the Settings button in the Startup and Recovery section. Now, select Windows as the default operating system, and click Ok.  Your netbook will now boot into Windows by default, but will give you 10 seconds to choose to boot into Jolicloud when you start your computer. Or, if you decided you don’t want Jolicloud, you can easily uninstall it from within Windows. Please note that this will also remove any files you may have saved in Jolicloud, so be sure to copy them to your Windows drive before uninstalling. To uninstall Jolicloud from within Windows, open Control Panel, and select Uninstall a Program. Scroll down to select Jolicloud, and click Uninstall/Change. Click Yes to confirm that you want to uninstall Jolicloud. After a few moments, it will let you know that Jolicloud has been uninstalled.  You’re netbook is now back the same as it was before you installed Jolicloud, with only Windows installed. Closing Whether you’re wanting to replace your current OS on your netbook or would simply like to try out a fresh new Linux version on your netbook, Jolicloud is a great option for you.  
    We were very impressed by its solid hardware support and the ease of installing new apps in Jolicloud. Rather than simply giving us a standard OS, Jolicloud offers a unique way to use your netbook with native programs and webapps. And whether you're an IT pro or a new computer user, Jolicloud is easy enough for anyone to use. Give it a try, and let us know what your favorite netbook OS is!

    Link: Download Jolicloud for your netbook


  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000,
            625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000,
            1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000,
            3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000,
            4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,

            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,

            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX)
            (Column1)
        SELECT
            (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX)
            (TID, Column1)
        SELECT
            T.TID,
            N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table T2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that's the test data done.
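    As a quick sanity check on that distribution, the catalog views should agree with the $PARTITION counts. The query below is an illustrative addition (not one of the original scripts), assuming the tables were created exactly as shown above:

        -- Cross-check: per-partition row counts straight from the catalog.
        -- Expect 125,000 rows per populated partition of T1 and roughly 375,000 for T2.
        SELECT
            table_name = o.name,
            p.partition_number,
            p.rows
        FROM sys.partitions AS p
        JOIN sys.objects AS o
            ON o.[object_id] = p.[object_id]
        WHERE o.name IN (N'T1', N'T2')
        AND p.index_id = 1
        ORDER BY o.name, p.partition_number;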
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = 1
        AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than a parallel one in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single-partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

    - Merge join does not require a memory grant; and
    - Merge join was the optimizer's preferred join option for a single-partition join.

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The cardinality estimates in the estimated plan aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole: summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3,500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan it found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that; we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition Streams exchange uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1,350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

Collocated parallel merge join: 1,350ms
Parallel hash join: 2,600ms
Collocated serial merge join: 3,500ms
Serial merge join: 5,000ms
Parallel merge join: 8,400ms
Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if they bother you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions – a sketch of that idea appears at the end of this excerpt):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1,350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2,600ms to 1,350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread’s point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
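The post notes in passing that dynamic SQL could be used to generate the hard-coded list of partition ids from sys.partitions. Below is a minimal, illustrative sketch of that idea (not code from the original post); it assumes the same dbo.T1 and dbo.T2 tables and the PFT partition function used throughout, and simply builds the VALUES list as a string before executing the rewritten query.

-- Illustrative sketch only: build the VALUES list of partition ids from
-- sys.partitions, then run the collocated merge join query with the same hints.
DECLARE @ids nvarchar(max), @sql nvarchar(max);

SELECT @ids = STUFF(
(
    SELECT ',(' + CONVERT(varchar(11), p.partition_number) + ')'
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
    AND p.index_id = 1
    ORDER BY p.partition_number
    FOR XML PATH('')
), 1, 1, '');

SET @sql = N'
SELECT row_count = SUM(Subtotals.cnt)
FROM (VALUES ' + @ids + N') AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = P.partition_number
    AND $PARTITION.PFT(T2.TID) = P.partition_number
) AS Subtotals
OPTION (QUERYTRACEON 8649);';

EXECUTE sys.sp_executesql @sql;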

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel.  They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them. Note:  This article is written using Excel 2010 (Beta).  The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel.  If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article. A Little History In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost.  Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus.  Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero.  How did this happen?  What caused such a dramatic reversal of fortunes? Industry analysts put it down to two factors:  Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off.  They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need.  Microsoft, naturally, developed Excel exclusively for Windows.  Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables.  The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entire new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it.  This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel. Understanding PivotTables So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data.  But unlike a manually created summary, Excel PivotTables are interactive.  Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for.  In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa.  There’s a lot more that can be done, too.  Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them… The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort.  An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below: Notice that this is not raw data.  In fact, it is already a summary of some sort.  In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January.  So where is the raw data?  How did we arrive at the figure of $30,000?  Where is the original list of sales transactions that this figure was generated from?  It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above.  How long do you suppose this took?  An hour?  Ten?  Probably. 
If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks.  We can do this – and a lot more too! How to Create a PivotTable First, ensure that you have some raw data in a worksheet in Excel.  A list of financial transactions is typical, but it can be a list of just about anything:  Employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars. So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon: The Create PivotTable box appears, asking you two questions:  What data should your new PivotTable be based on, and where should it be created?  Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example).  Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even a MS-SQL Server database table.  We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one.  In this example we will select a new one: The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears:  The PivotTable Field List.  This field list will be shown whenever we click on any cell within the PivotTable (above): The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet.  The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data.  So far, there is nothing in those boxes, so the PivotTable is blank.  All we need to do is drag fields down from the list above and drop them in the lower boxes.  A PivotTable is then automatically created to match our instructions.  If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them. The Values box is arguably the most important of the four.  The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc).  It is almost always numerical data.  A perfect candidate for this box in our sample data is the “Amount” field/column.  Let’s drag that field into the Values box: Notice that (a) the “Amount” field in the list of fields is now ticked, and “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed. If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet: We’ve created our first PivotTable!  Handy, but not particularly impressive.  It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total.  For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.  
To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box: Now, finally, things start to get interesting!  Our PivotTable starts to take shape….   With a couple of clicks we have created a table that would have taken a long time to do manually. So what else can we do?  Well, in one sense our PivotTable is complete.  We’ve created a useful summary of our source data.  The important stuff is already learned!  For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized. First, we can create a two-dimensional table.  Let’s do that by using “Payment Method” as a column heading.  Simply drag the “Payment Method” heading to the Column Labels box: Which looks like this: Starting to get very cool! Let’s make it a three-dimensional table.  What could such a table possibly look like?  Well, let’s see… Drag the “Package” column/heading to the Report Filter box: Notice where it ends up…. This allows us to filter our report based on which “holiday package” was being purchased.  For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package: And so, if you think about it the right way, our PivotTable is now three-dimensional.  Let’s keep customizing… If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings.  Click the drop-down arrow next to Column Labels, and untick “Cash”: Let’s see what that looks like…As you can see, “Cash” is gone. Formatting This is obviously a very powerful system, but so far the results look very plain and boring.  For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers.  Let’s rectify that. A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting.  The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost.  We need a way that will make them (semi-)permanent. First, we locate the “Sum of Amount” entry in the Values box, and click on it.  A menu appears.  We select Value Field Settings… from the menu: The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears: From the Category list, select (say) Accounting, and drop the number of decimal places to 0.  Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts. While we’re on the subject of formatting, let’s format the entire PivotTable.  There are a few ways to do this.  Let’s use a simple one… Click the PivotTable Tools/Design tab: Then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles: Choose any one that appeals, and look at the result in your PivotTable:   Other Options We can work with dates as well.  Now usually, there are many, many dates in a transaction list such as the one we started with.  But Excel provides the option to group data items together by day, week, month, year, etc.  Let’s see how this is done. 
First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column: As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table! To fix this, right-click on any date and select Group… from the context-menu: The grouping box appears.  We select Months and click OK: Voila!  A much more useful table: (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.) Another cool thing to be aware of is that you can have more than one set of row headings (or column headings): …which looks like this…. You can do a similar thing with column headings (or even report filters). Keeping things simple again, let’s see how to plot averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context-menu that appears: In the Summarize value field by list in the Value Field Settings box, select Average: While we’re here, let’s change the Custom Name, from “Average of Amount” to something a little more concise.  Type in something like “Avg”: Click OK, and see what it looks like.  Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”: If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable): Drag “Salesperson” into the Column Labels Drag “Amount” field down into the Values box three times For the first “Amount” field, change its custom name to “Total” and it’s number format to Accounting (0 decimal places) For the second “Amount” field, change its custom name to “Average”, its function to Average and it’s number format to Accounting (0 decimal places) For the third “Amount” field, change its name to “Count” and its function to Count Drag the automatically created field from Column Labels to Row Labels Here’s what we end up with: Total, average and count on the same PivotTable! Conclusion There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this.  To fully cover the potential of PivotTables, a small book (or a large website) would be required.  Brave and/or geeky readers can explore PivotTables further quite easily:  Simply right-click on just about everything, and see what options become available to you.  There are also the two ribbon-tabs: PivotTable Tools/Options and Design.  It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel that should work with most versions of Excel, so you can download to practice your PivotTable skills. 
    Download Our Practice Excel File

    Read the article

  • CodePlex Daily Summary for Tuesday, March 16, 2010

    CodePlex Daily Summary for Tuesday, March 16, 2010New ProjectsAAPL-MySQL (a MySql implementation of the Agile ADO.Net Persistence Layer): Using the code conventions and code of the Agile ADO.Net Persistence Layer to build a MySql implementationAddress Book: Address Book is simple and easy to use application for storing and editing contacts. It has many features like search and grouping. It's developed ...Airplanes: Airplanes GameAJAX Fax document viewer: The AJAX Fax document viewer is an ASP.net 4.0 project which uses a Seadragon control to display Tiff/Tif files inside the browser. Since it's base...AxeFrog Core: This project contains the foundational code used in several other projects.Bolão: O objetivo deste projeto é estimular a troca de informações e experiências sobre arquitetura e desenvolvimento de software na prática, através do d...Caramel Engine: This is designed to be a logic engine for developing a wide variety of games. To include card logic, dice logic, territories, players, and even man...DotNetNuke® Skin BrandWorks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by M Perakakis & M Siganou (e-bilab). A very minimal look sk...DotNetNuke® Skin Reasonable: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Ralph Williams of Architech Solutions. A clean, classy, professi...DotNetNuke® Skin Seasons: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Atul Handa and Kevin McCusker of Mercer. This skin is a generic ...File Eraser: File Eraser make it esasier for IT Administrator or Advanced Users to administer files and eliminate long addresses, even UNC. It's developed in VB...GreenEyes: My project for IT personalHulu Launcher: Hulu Launcher is a simple Windows Media Center add-in that attempts to launch Hulu Desktop and manage the windows as seamlessly as possible.Imenik: Imenik makes it easier for users to organise their contacts!KharaPOS: KharaPOS is an Exemplar application for Silverlight 4, WCF RIA Services and .NET Framework 4.Mantas Cryptography: Pequena biblioteca de criptografia com suporte aos algorítmos DES, RC2, Rexor e TripleDES. Gera hashes HMAC-MD5, HMAC-RIPEMD160, HMAC-SHA (SHA1, S...MapWindow3D: A C# DirectX library that extends the MapWindow 6.0 geospatial software library by adding an external map. The map supports rotation and tilting i...Microsoft Silverlight Analytics Framework: Extensible Web Analytics Framework for Microsoft Silverlight Applications.Moq Examples: Unit tests demonstrating Moq features.Ncqrs: Framework that helps you to create Command-Query Responsibility Segregation based applications easily.NoLedge - Notetaking and knowledge database: NoLedge is an easy knowledge gathering and notetaking freeware with a simple interface. You can link a note with different titles and can retrieve ...Numina Application Framework: The framework is a set of tools that help facilitate authentication, authorization, and access control. It is much more than a SSO. It is a central...OData SDK for PHP: OData SDK for PHP is a library to facilitate the connection with OData Services.patterns & practices - Windows Azure Guidance: p&p site for Windows Azure related guidance.projecteuler.net: Exploring projecteuler.net using F#ResumeTracker: Small and easy to use tool that helps to track your job applications. To report bugs or for suggestions please email kirunchik@gmail.com.Selection Maker: Have you ever create a collection of music files? 
Imagine you just want to pick best of them by your choice. you should go to several folder,play...ShureValidation: Multilingual model validation library. Supports fluent validations, attribute validations and own custom validations.Simple Phonebook: Simple phonebook allows you to store contacts. It also allows you to export the contacts to .txt or .csv. This application is written in C#, but it...SoftUpd: A usefull library which provides an Update feature to any .Net software.sTASKedit: This program can modify and export Perfect World tasks.data files...Tigie: Tigie is a simple CMS system for basic website. It's simple, easy to customize. you'll have a very basic cms to start with and expand for it. All c...toapp: ap hwUltiLogger: UltiLogger is a fast, lightweight logging library programmed in C#. It is meant as a fast, easy, and efficient way for developers to access a relia...Unnme: UnnmeVisual Studio 2008 NUnit snippets: A simple set of useful NUnit snippets, for Visual studio 2008.webdama: italian checkers game c#XBrowser - Headless Browser for .Net: XBrowser is a "headless" web browser written for .Net applications using C#. It is designed to allow automated, remote controlled and test-based br...XML Integrator: XML integration, collaborative toolNew ReleasesAddress Book: Address Book: Address BookAddress Book: Address Book - Source: Address Book source code.AJAX Fax document viewer: AJAXTiff_Source 1.0: Source project for the AJAX Tiff viewer v1.0. Written in Visual Studio 2010 RC using ASP.net 4.0ASP.Net Routing Configuration: mal.Web.Routing v1.0.0.0: mal.Web.Routing v1.0.0.0ASP.NET Wiki Control: Release 1.0: Includes VS2010 Solution and Project files but targets the 3.5 framework so can still be used with VS2008 if new project files are created.BingPaper: Beta: BingPaper Beta Release This Beta release contains quite a few improvements: This Beta release contains a complate overhaul of the replacement tok...Bolão: teste: testeDotNetNuke® Skin Reasonable: Reasonable Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Ralph Williams of Architech Solutions. A clean, classy, professi...DotNetNuke® Skin Seasons: Seasons Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Atul Handa and Kevin McCusker of Mercer. This skin is a generic ...dylan.NET: dylan.NET v. 9.2: In TFS Serverdylan.NET Apps: dylan.NET Apps v. 1.1: First version of dnu.dll together with v.9.2 of dylan.NETFamily Tree Analyzer: Version 1.1.0.0: Version 1.1.0.0 Census report now shows in bold those individuals you KNOW to be alive at the date of the census. Direct Ancestors on census repor...GLB Virtual Player Builder: 0.4.1: Minor change to reset non-major/minor attr to 8.Hulu Launcher: HuluLauncher Release 1.0.1.1: HuluLauncher Release 1.0.1.1 is the initial, barely-tested release of this Windows Media Center add-in. It should work in Vista Media Center and 7 ...Imenik: Imenik: Imenik is now available!jQuery Library for SharePoint Web Services: SPServices 0.5.3: NOTE: While I work on new releases, I post alpha versions. Usually the alpha versions are here to address a particular need. 
I DO NOT recommend usi...KeelKit: KeelKit 1.0.3800: 更新内容如下: 优化了DBHelper的一些机制 修正一些BUG 支持Mysql PHP代理,使得能通过Web代理的方式远程访问数据库服务器 添加Model实例化方法,支持所有非自动计算字段的参数实例化、支持所有非空字段实例化 添加Model中的常量,使用这些常量可以获得表名称。 添加了自...Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.0.0: Updated to MEF Preview 9 Moved MefContrib.Extensions.Generics to MefContrib.Hosting.Generics Moved MefContrib.Extensions.Generics.Tests to MefC...MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL Web Databinder v0.1.1: Repaired a problem with collections... only index number under 10 were allowed... Please send me your comments and rate the project. Sorry.Mouse Gestures for .NET: Mouse Gestures 1.0: Improved version of the component + sample application. Version 1.0 is not backward compatible.MoviesForMyBlog: MoviesForMyBlog V1.0: This is version 1.0Nito.KitchenSink: Version 1: The first release of Nito.KitchenSink, which uses Nito.Linq 0.1. Please report any issues via the Issue Tracker.Nito.LINQ: Beta (v0.1): This is the first official public release of Nito.Linq. This release only supports .NET 3.5 SP1 with the Microsoft Rx libraries. The documentation...Nito.LINQ: Beta (v0.2): Added ListSource.Generate overloads that take a delegate for counting the list elements.OData SDK for PHP: OData SDK for PHP: This is an updated version of the ADO.NET Data Services toolkit for PHP. It includes full support for OData Protocol V2.0 specification, better pro...Orchard Project: Orchard Latest Build (0.1.2010.0312): This is a technical preview release of Orchard. Our intent with this release is to demonstrate representative experiences for our audiences (end-us...patterns & practices – Enterprise Library: Enterprise Library 5.0 - Beta2: This is a preliminary release of the code and documentation that may potentially be incomplete, and may change prior to the final release of Enterp...patterns & practices - Unity: Unity 2.0 - Beta2: This is a preliminary release of the code and documentation that may potentially be incomplete, and may change prior to the final release of Unity ...patterns & practices - Windows Azure Guidance: Code drop - 1: This initial version is the before the cloud baseline application, so you won’t find anything related to Windows Azure here. Next iteration, we'l...Rawr: Rawr 2.3.12: - First, a note about Rawr3. Rawr3 has been in development for quite a while now, and we know that everyone's eager to get it. It's been held back ...ResumeTracker: Resume Tracker v1.0: First release.Selection Maker: Selection Maker 1.0: This is just the first release of this programSevenZipSharp: SevenZipSharp 0.61: Added: Windows Mobile support bool Check() method for Extractor to test archives integrity FileExtractionFinished now returns FileInfoEventArgs...Silverlight 3.0 Advanced ToolTipService: Advanced ToolTipService v2.0.1: This release is compiled against the Silverlight 3.0 runtime. A demonstration on how to set the ToolTip content to a property of the DataContext o...Silverlight Flow Layouts library: SL and WPF Flow Layouts library March 2010: This release indtroduces some bug fixes, performance improvements, Silverlight 4 RC and WPF 4.0 RC support. Flow Layouts Library is a control libra...Simple Phonebook: SimplePhonebook Visual Studio 2010 Solution: Ovo je cijeli projekt u kojem se nalaze svi source fileovi koje sam koristio u izradi ove aplikacije. Za pokretanje je potreban Visual Studio 2010....Simple Phonebook: SimplePhonebook.rar: U ovoj .rar datoteci nalaze se izvršni fileovi. 
_ In this .rar file you can find .exe file needed for executing the application.SLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.1.0: Updated to Silverlight 4 Release Candidate. Introduces the new GenericMarkerDetector which uses the IXrgbReader interface. See the Marker Detecto...sPWadmin: pwAdmin v1.0: Fixed: Templates can now be saved server restart persistant (wait at least 60 seconds between saving and restarting)SQL Director for Dependencies & Indexes: SDD CTP 1.0: SQL Director for Dependencies allows you to view dependencies between tables, views, function, stored procedures and jobs. Newest Testing build, f...SqlCeViewer: SeasonStar Database Management 0.7.0.2: Update the user interface to help user understand clearly how to use .UltiLogger: Initial alpha release: Important! This is not a feature-complete release! It contains the logging priorities, and an interface for building logging systems from. THERE IS...Visual Studio 2008 NUnit snippets: Version 1.0: First stable release.Zeta Resource Editor: Source Code Release 2010-03-16: New source code. Binary setup is also available.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitASP.NET Ajax LibraryWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMost Active ProjectsLINQ to TwitterOData SDK for PHPRawrN2 CMSpatterns & practices – Enterprise LibraryDirectQBlogEngine.NETMapWindow6SharePoint Team-MailerNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Is your team is a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida while seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoner’s primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group or working group used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not group members usually feel as though they are expendable to the group and some even dread that they are even in the group. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team?  This can be determined by three base line criteria that include: consistently high quality output, the promotion of personal growth and well being of all team members, and most importantly the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria, however this will erode over time because team members will feel detached from the group or that they are not growing then the quality of the output will decline. High performing teams are similar to work groups because they both utilize a collection of individuals or entities to accomplish tasks. What distinguish a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics. These characteristics are what separate a group from a team. The five characteristics of a high-performing team include: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle include: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader because team members are normally unclear about specific roles, specific obstacles and goals that my lay ahead of them.  Finally, this stage is where most team members initially meet one another prior to working as a team unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team because there are still a lot of unknowns amongst the newly formed assembly. 
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
Having personally, participated in meetings of this type, usually a select few of the team members dominate the flow of communication which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals the discussions and group discussion are skewed in favor of the individuals who communicate the most in meetings. In addition, Team members might not give their full opinions on a topic of discussion in part not to offend or create controversy amongst the team and can alter decision made in meetings towards those of the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections. That is exactly what distributed teams when compared to a more traditional team. It is common place for distributed teams to have team members across town, in the next state, across the country and even with the advances in technology over the last 20 year across the world. These teams allow for more diversity compared to the other type of teams because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India because technology allows them to communicate as if they were standing next to one another.  In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the changes in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams which are a subset of the distributed team type is changing organizational strategies due to the fact that a team can now in essence be working 24 hrs a day because of utilizing employees in various time zones and locations.  A primary example of this is with customer services departments, a company can have multiple call centers spread across multiple time zones allowing them to appear to be open 24 hours a day while all a employees work from 9AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works because they will be a part of a virtual team all that is need is the proper technology to be setup to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that randomly come into the office can actually share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and various external parties involved in the teams work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true about leaders and leadership. 
A team takes on the attitudes and behaviors of its leaders. This can potentially harm the team and the team’s output. Leaders must concentrate on self-awareness, and understanding their team’s group dynamics to fully understand how to lead them. In addition, always learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not.  Leaders need to be able to effectively communicate with their team on how their work helps the company reach for its organizational vision. Conversely, the leader needs to allow his team to work autonomously within specific guidelines to turn the company’s vision into a reality.  This being said the team must be appropriately staffed according to the size of the team’s tasks and their complexity. These tasks should be clear, and be meaningful to the company’s objectives and allow for feedback to be exchanged with the leader and the team member and the leader and upper management. Now if the team is properly staffed, and has a clear and full understanding of what is to be done; the company also must supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel.  Finally, leaders must reward their team members for accomplishments that they achieve. Awards could range from just a simple congratulatory email, a party to close the completion of a large project, or other monetary rewards. Managing boundaries is very important for team leaders because it can alter attitudes of team members and can add undue stress to the team which will force them to loose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions as to not create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves. This could really increase further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form so that all unproductive behaviors are removed from the team and that it can retain focus on its agenda. If an intervention is effectively executed the team will feel energized about the work that they are doing, promote communication and interaction amongst the group and improve moral overall. High-performing teams are very import to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents allowing for growth in these areas.  In addition, these team members usually take on a sense of ownership with their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than previous versions to take advantage of compression that’s built in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let’s take a look at each of the two approaches available:

    Static Compression
    Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving the compressed alias whenever the static content is requested and it hasn’t changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic Compression
    Works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that produces it regenerates its content. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured
    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression):

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/json" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </staticTypes>
        </httpCompression>
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />
      </system.webServer>
    </configuration>

    You can find documentation on the httpCompression and urlCompression keys here respectively:
    http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
    http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element – What and How to compress
    Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, matched against the Content-Type headers returned in HTTP responses.
    For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element – Enables and Disables Compression
    The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor in whether compression is enabled, so it’s a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not
    Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting:

    <urlCompression doDynamicCompression="true" />

    in applicationhost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works
    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

    %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders you’ll find compressed files: These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression.
    You can do this with the staticCompressionLevel setting on the scheme element:

    <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that the default settings are probably just fine.

    Dynamic Compression – not so fast!
    By default dynamic compression is disabled, and that’s actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn’t make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server’s performance.

    There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

    dynamicCompressionDisableCpuUsage
    dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it uses spare CPU power for compression when it is available and backs off when the threshold has been hit. It’s a good way to use some of that extra CPU power on your big servers when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there’s lots of power to spare. I’m not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don’t trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment.

    Finally for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

    <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?
    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to:

    <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn’t show up on the site right away; it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – not sure if it’s worth the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy
    In summary, IIS 7 makes GZip easy at last, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
But once you know the basic settings I’ve described here and the fact that you can override all of this in your local web.config, it’s pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression. Related Content: Code based ASP.NET GZip Caveats · HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies the features of many of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a single, more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But RPC style calls, which are common with AJAX callbacks in Web applications, have gotten a lot less focus and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks. RPC vs. 'Proper' REST RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls tend to simply map a server side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that need things like live validation against data, filling data based on user input, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon.  Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both the RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters. Action Routing for RPC Style Calls If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
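For contrast, the verb based setup those demos start from boils down to the single default route that the Web API project template registers, where the HTTP verb – not an {action} token – picks the controller method. A rough sketch of that template code:

RouteTable.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);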
Thankfully Web API also supports Action based routing which allows you to create RPC style endpoints fairly easily: RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi", action = "GetAlbums" } ); This uses traditional MVC style {action} method routing which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX things are not so obvious. Web API has no issue allowing for a single parameter like this: [HttpPost] public string PostAlbum(Album album) { return String.Format("{0} {1:d}", album.AlbumName, album.Entered); } There are actually two ways to call this endpoint: albums/PostAlbum Using the Model Binder with plain POST values In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter. Each property is matched to its matching POST value. This works similarly to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery: $.ajax( { url: "albums/PostAlbum", type: "POST", data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" }, success: function (result) { alert(result); }, error: function (xhr, status, p3, p4) { var err = "Error " + " " + status + " " + p3; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); Here's what the POST data looks like for this request: The model binder and its straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests. Using the Web API JSON Formatter The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first. album = { AlbumName: "PowerAge", Entered: new Date(1977,0,1) } $.ajax( { url: "albums/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify(album), success: function (result) { alert(result); } }); Here the data is sent using a JSON object rather than form data and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (Source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value, and the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object. Passing multiple Parameters to a Web API Controller Single parameters work fine in either of these RPC scenarios and that's to be expected. ModelBinding always works against a single object because it maps a model.
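For reference, the examples above and below assume an Album model along these lines – the class isn't shown in the post, so this is just a minimal guess at its shape based on the properties the samples use:

using System;

public class Album
{
    // only the two properties referenced in the samples; the real class presumably has more
    public string AlbumName { get; set; }
    public DateTime Entered { get; set; }
}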
But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following: [HttpPost] public string PostAlbum(Album album, string userToken) Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work: Use both POST *and* QueryString Parameters in Conjunction If you have both complex and simple parameters, you can pass simple parameters on the query string. The above would actually work with: /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types. Use a single Object that wraps the two Parameters If you go by service based architecture guidelines every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like: public PostAlbumResponse PostAlbum(PostAlbumRequest request) { var album = request.Album; var userToken = request.UserToken; return new PostAlbumResponse() { IsSuccess = true, Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken) }; } with these support types: public class PostAlbumRequest { public Album Album { get; set; } public User User { get; set; } public string UserToken { get; set; } } public class PostAlbumResponse { public string Result { get; set; } public bool IsSuccess { get; set; } public string ErrorMessage { get; set; } } To call this method you now have to assemble these objects on the client and send them up as JSON: var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result.Result); } }); I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method can actually use. Use JObject to parse multiple Property Values out of an Object If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
Rather than directly calling a service you always passed an object which contained properties for each parameter: { parm1: Value, parm2: Value2 } WCF REST/ASP.NET AJAX would then parse these top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end: [HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code: var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result); } }); Summary ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but it's good to know what options are available to simulate this behavior. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the query string. But saying that doesn't mean that you won't occasionally run into a situation where you need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line of business applications.  Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole.  I’m a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects.  In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow) and the information shared in this post was promised during that presentation. One other thing I want to mention before diving in is the existence of a number of other great posts on this subject.  I’ve learned a lot from many of them and wanted to call out a few of them.  The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject: http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx Technologies I don’t intend for this post to turn into a full WCF RIA Services tutorial but I did want to point out what technologies we will be using: Visual Studio.NET 2010 Silverlight 4.0 WCF RIA Services for Visual Studio 2010 Entity Framework 4.0 I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express.  The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#…sorry to you VB folks but the concepts are 100% identical. Setting up a new RIA Services Project This section will provide a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server side code and a simple Entity Framework model for data access.  All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace.  This would be modified to match your company's standards. First, open Visual Studio and open the new project window via File->New->Project.  In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select “Silverlight Application” as your project type.  Verify your solution name and location are set appropriately.  Note that the project name we specified in the example below ends with .Client.  This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that.  Click OK to continue. During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content.  As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the “Enable WCF RIA Services” option is checked and click OK.  
Obviously, there are some other options here which have an effect on your solution and you are welcome to look around.  For our example we are going to leave the ASP.NET Web Application Project selected.  If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project these options are available as well.  Also, whichever web project type you select, the name can be modified here as well.  Note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix. At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows: Now we want to add our WCF RIA Services projects to this same solution.  To do so, right-click on the Solution node in the solution explorer and select Add->New Project.  In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below.  Make sure your project name is set appropriately as well.  For the sample below, we will name the project “ArchitectNow.RIAServices.Server.Entities”.   The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below).  Click OK to continue. Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution.  The first will be an project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension.  The full solution (with all 4 projects) is shown in the image below.  The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects.  It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server.   The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework).  This is our server side code and business logic and the RIA Services plumbing will maintain a link between this project and the front end.  Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end. At this point, we want to do a little cleanup of the projects in our solution and we will do so by deleting the “Class1.cs” class from both the .Entities project and the .Entities.Web project.  (Has anyone ever intentionally named a class “Class1”?) Next, we need to configure a few references to make RIA Services work.  THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library).  Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).  
To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select “Add Reference” from the resulting context menu.  You will want to make sure these references are added as “Project” references to simplify your future debugging.  To reiterate the reference direction using the project names we have utilized in this example thus far:  .Client references .Entities and .Client.Web references .Entities.Web.  If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library and the ASP.NET host project must reference the server-side class library. Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web).  We will do this by right clicking on this project (ArchitectNow.RIAServices.Server.Entities.Web in the above diagram) and selecting Add->New Item.  In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram.  For now we will call this simply SampleDataModel.edmx and click OK. It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data and any data access technology is supported (as long as the server side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post).  We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine.  The following diagram shows a simple EF Data Model with two tables that I reverse engineered from a local data store.  If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access. At this point, once you have an EF data model generated as an EDMX in your .Entities.Web project YOU MUST BUILD YOUR SOLUTION.  I know it seems strange to call that out but it is important that the solution be built at this point for the next step to be successful.  Obviously, if you have any build errors, these must be addressed at this point. At this point we will add a RIA Services Domain Service to our .Entities.Web project (our server side code).  We will need to right-click on the .Entities.Web project and select Add->New Item.  In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below).  Next, click "Add”. After clicking “Add” to include the Domain Service Class in the selected project, you will be presented with the following dialog.  In it, you can choose which entities from the selected EDMX to include in your services and if they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service.  If the “Available DataContext/ObjectContext classes” dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX.  I would also recommend verifying that the “Generate associated classes for metadata” option is selected.  Once you have selected the appropriate options, click “OK”. 
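To give you an idea of what gets generated, a domain service built over an EF model typically looks something like the following – a simplified sketch assuming a hypothetical Customer entity and an EDMX context named SampleDataModelContainer, not the exact code the wizard emits for your model:

using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;

[EnableClientAccess()]
public class SampleService : LinqToEntitiesDomainService<SampleDataModelContainer>
{
    // one query method per exposed entity; RIA Services generates matching load operations on the client
    public IQueryable<Customer> GetCustomers()
    {
        return this.ObjectContext.Customers;
    }

    // generated only when "Enable editing" was checked for the entity
    public void InsertCustomer(Customer customer)
    {
        this.ObjectContext.Customers.AddObject(customer);
    }
}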
Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following: Note that in the solution you now have a SampleDataModel.edmx which represents your EF data mapping to your database and a SampleService.cs which will contain a large amount of generated RIA Services code which RIA Services utilizes to access this data from the Silverlight front-end.  You will put all your server side data access code and logic into the SampleService.cs class.  The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes. FINAL AND KEY CONFIGURATION STEP!  One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project.  While we didn’t point it out at the time you can see it in the image above.  This connection string will be required for the EF data model to successfully locate its data.  Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.  Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web).  This host project has a reference to the .Entities.Web project which actually contains the code, so all will pass through correctly EXCEPT for the fact that the host project will utilize its own Web.Config for any configuration settings.  For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project.  I know this is a bit tedious and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front end Silverlight project.  Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config.  Unfortunately, the <system.webServer> section will exist in both and the contents of this section will need to be manually merged.  Fortunately, this is a step that needs to be taken only once per solution.  As you add additional data structures and Domain Services methods to the server no additional changes will be necessary to the Web.Config. Next Steps At this point, we have walked through the basic setup of a simple RIA Services solution.  Unfortunately, there is still a lot to know about RIA Services and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven’t even made a single RIA Services call).  I plan on posting a few more introductory posts over the next few weeks to take us to this step.  If you have any questions on the content in this post feel free to reach out to me via this blog and I’ll gladly point you in (hopefully) the right direction. Resources Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services.  While I plan on posting more on the subject, I didn’t invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.   
The books and online resources below will go a long way to making you extremely productive with RIA services in the shortest time possible.  The only thing required of you is the dedication to take advantage of the resources available. Books Pro Business Applications with Silverlight 4 http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2 Silverlight 4 in Action http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1 Pro Silverlight for the Enterprise (Books for Professionals by Professionals) http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3 Web Content RIA Services http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes http://silverlight.net/riaservices/ http://www.silverlight.net/learn/videos/all/net-ria-services-intro/ http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/ http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01 http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx Silverlight www.silverlight.net http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, the System.Collections.Generic namespace was born and we got type-safe versions of all our favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11:  12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20:  21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes. 
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly object _orderLock = new object(); 5:  6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9:  10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17:  18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we’re performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed. The need for better concurrent access to collections Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, much less make perform efficiently!  For example, what if you have a Dictionary that has frequent reads but infrequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim, which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell).  This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application whether it be single-threaded or multi-threaded. 
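To make that concrete, here is roughly what the hand-rolled ReaderWriterLockSlim approach described above looks like for a read-mostly dictionary – not code from this article, just a sketch of the pattern the concurrent collections are designed to save you from writing:

// requires System.Collections.Generic and System.Threading
public class ReadMostlyCache
{
    private readonly Dictionary<string, decimal> _prices = new Dictionary<string, decimal>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    // many readers can hold the read lock at the same time
    public bool TryGetPrice(string symbol, out decimal price)
    {
        _lock.EnterReadLock();
        try { return _prices.TryGetValue(symbol, out price); }
        finally { _lock.ExitReadLock(); }
    }

    // a writer blocks new readers until the update is finished
    public void SetPrice(string symbol, decimal price)
    {
        _lock.EnterWriteLock();
        try { _prices[symbol] = price; }
        finally { _lock.ExitWriteLock(); }
    }
}

Getting even this simple case right (and keeping it right as the code grows) is exactly the kind of work the concurrent collections take off your plate.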
General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes. Note that since the concurrent collections also may have different operations than the traditional data structures you may be used to.  Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I’ll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that the multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like it’s Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop() Returns true if an item existed and was popped and false if empty. PushRange() and TryPopRange() were added Allows you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items. This means it is a O(n) operation, if you just want to check for an empty stack, call IsEmpty instead which is O(1). ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you’d expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently. 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().  
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have the TryPop() for single, and TryPopRange() for multiple items.  Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6:  7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10:  11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Finally, note that as stated before, GetEnumerator() and ToArray() gets a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: var results = stack.GetEnumerator(); 7:  8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10:  11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably: Dequeue() was removed in favor of TryDequeue(). Returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot It subtracts the head and tail index to get the count.  This results overall in a O(1) complexity which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots. 
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item of the queue.  There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5:  6: var iterator = queue.GetEnumerator(); 7:  8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10:  11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL’s Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5:  6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12:  13: results.Enqueue(result); 14: }); 15:  16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold. 
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in the Windows Azure platform. It’s low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role. - WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases. - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent request limitation. - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides service configuration features which can be changed while the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using a WACS worker role we start node ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started with just its executable file. This means in Windows Azure we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class. Let’s create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right click on the worker role project and add an existing item. By default Node.js will be installed in the "Program Files\nodejs" folder so we can navigate there and add "node.exe". Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it’s hosted by IIS and IISNode and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu "Add new item". We have to create a text file, and then rename it with a JavaScript extension. After we added these two files we should set their "Copy to Output Directory" property to "Copy Always", or "Copy if Newer". Otherwise they will not be included in the package when deployed. Let’s paste some very simple Node.js code into "index.js" as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start "node.exe" with this file when our worker role is started. This can be done in its Run method. I locate the Node.js executable and the entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to exit. 
If everything is OK, once our web server is opened the process will stay there listening for incoming requests, and should not be terminated. The code in the worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to Azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint to the worker role, since Node.js always listens on TCP instead of HTTP. And then change "index.js" to let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in the browser we can see our Node.js website running on the WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers 80 and maps the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tries to listen on 80 it will fail since 80 has already been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install the modules we need, and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to do some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk and not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that’s not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they can be uploaded to Azure with "node.exe" and "index.js". This is a painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            // note: concatenating request values into SQL like this is demo-only code and is open to SQL injection
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Publish to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, it's possible to do some preparation work before the Node.js application starts, and it removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I described here. The first one is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second one is that, in this way, we cannot retrieve the cloud service configuration information.
For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. Ideally Node.js should retrieve the endpoint itself, but I can tell you that does not work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js (a rough sketch of the general idea follows this post). I will also demonstrate how to use Windows Azure Storage from Node.js through the Windows Azure Node.js SDK.

Hope this helps,

Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
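The rough sketch mentioned above: one possible direction (not necessarily the approach described in the next post) is to read the endpoint inside the worker role's Run method and hand the port to node.exe as a command line argument instead of hard-coding it in index.js. This reuses the node and js variables from the Run method shown earlier and assumes the input endpoint created in Visual Studio is named "NodeEndpoint"; the name is just an example.

// Sketch only: assumes the worker role defines an input endpoint named "NodeEndpoint".
// RoleEnvironment lives in Microsoft.WindowsAzure.ServiceRuntime.
var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["NodeEndpoint"].IPEndpoint;

// pass the port to index.js as the first command line argument
var info = new ProcessStartInfo(node, string.Format("{0} {1}", js, endpoint.Port))
{
    UseShellExecute = false,
    WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
};
Process.Start(info).WaitForExit();

On the Node.js side, index.js could then read the port with var port = process.argv[2] || 80; instead of a fixed value.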

    Read the article

  • CodePlex Daily Summary for Tuesday, January 11, 2011

    CodePlex Daily Summary for Tuesday, January 11, 2011Popular ReleasesArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry. Mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed. As opposed to tracking the editing changes upon "Save edit" or "Stop edit" the changes a...VSSpeedster - Parallel Builds for VS: VSSpeedster 1.2 (beta): - Improved Parallel Builds - Cancel running Parallel Build using Ctrl+BreakASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Multiple server ASP.NET Reverse Ajax: This sample project demonstrates how is easy to scale your web applications via PokeInHawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In the case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5) as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with latest PHP opensource applications and several issue fixes. To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phalanger. Changes made within this release include following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...EnhSim: EnhSim 2.3.0: 2.3.0This release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a buton to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xmlExtended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. 
You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...sNPCedit: sNPCedit v0.9d: added elementclient coordinate catcher to catch coordinates select a target (ingame) i.e. your char, npc or monster than click the button and coordinates+direction will be transfered to the selected row in the table corrected labels from Rot to Direction (because it is a vector)Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 beta Released: Hi, Today Visifire is released along with one new feature * Inlines property has been implemented in Title. Now onwards you can customize the text content in Title. Please check out the Visifire documentation for more information. This release contains fix for the following bugs: * Styles for chart elements were not working as expected. * Bar chart was not drawn properly if AxisMinimum property was set to a value above zero base line. * In DateTime axis, AxisLables were no...Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! Conclusion : - UltimateJB203PSXXXDEFAULTKAKAROTO=> Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43 - UltimateJB203PS341_HERMES => Pas de spoof mais version hermes 4b - UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof des firmwares 3.50 et 3.55 au li....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5- ASP.NET MVC 3 and EF Code First: Demo web app ASP.NET MVC 3, Razor and EF Code FirstVidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. 
Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMSA simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links Includes several widgets ( tag cloud, archives, recent, user cloud, links twitter, blog roll and more ) Built using the http://commonlibrarynet.codeplex.com framework. ( Uses TDD, DDD, Models/Entities, Code Generation ) Can run w/ In-Memory Repositories or Sql Server Database See Documentation tab for Ins...New Projects.NET Event Spy: Full information available here: http://martincarolan.blogspot.com/2011/01/secret-project.html Simple development/debugging tool that hooks into and monitors events raised on any .NET object3DTweet: 3Dtweet is an effort to make tweets appear in a aesthetic manner to the users of windows phone.Its developed using VS2010 expresss.Agile .NET with SCRUM and XP: Source code for the book Apress Professional Agile .NET Development with SCRUM and XPBeskid Niski Agroturystyka: Travel Poland, turystyka w beskidzie niskim. Agroturystyka w miejscowosci Losie nad zalewem KlimkowkaCoding better: A better coding labs for .net new feature.FBApp: A simple facebook app I was busy with over the holidays as an experiment to try out the facebook api. Is currently not complete but I wanted to get some criticism on it for my 1st web app. It is developed using WPF and C#. Freemium Helper for WebMatrix: The Freemium Helper for WebMatrix provides an easy way to apply the Freemium model into your WebMatrix site. Using different user groups (or roles), it allows you to easily enable or disable features on your pages depending on the stock-keeping unit the user has paid for.Haversine Distance Calculation: A very small project that implements the Haversine formula, which calculates the great circle distance between two points on the earth's surface. The points are latitude / longitude coordinates in DD. The formula is implemented client side with javascript and server side with C#.Hexa.Core: Hexa.Core is our implementation of the Domain Driven Design Architecture. 
Also providing a set of helper classes for ASP.Net and WCF development.Minecraft NBT reader: A simple Minecraft NBT reader.MobSoft: MobSoft is silverlight based news related application designed to test the new functionalitities in Silverlight 4netduino Helpers: The 'netduino Helpers' is a C# driver set for common hardware components and features convenient wrappers around complex .Net Micro Framework features such as: Analog joysticks, Real-time clock, 8*8 LED matrix, Shift register, runtime assembly & resource loader, bitmaps, etc.NewsGator Social Connectors for Sharepoint 2010: This project contains social connectors for the NewsGator SharePoint platform and supports sending messages to Twitter and LinkedIn just by putting tags in the text #li for sending to linkedin #tw for sending to twitterNon Profit Contact Relationship Management: A non profit contact relationship management software intended to help those in the non profit arena manage donors, sponsors, and prospects.OpenAGE: OpenAGE, short for Open Advance Game Engine, is aimed at developing a new Advanced Game engine strictly for the PC and Xbox360 gaming System using XNA 4.0, and Visual Studio 2010OpenAutoPoster: OpenAutoPoster automates some of the boring everyday tasks of aggregating, linking and posting that haunts content creators.Phefer WoodTurning Sketcher: Draw out your own turnings before you hit the wood. Import images and trace around them, print them out with the length and width measuresments.Simple Script Interpreter- A simple GPLEX/GPPG (Lex/Yacc) Primer: Simple Script is a simple implementation of an interpreter language built with GPlex and Gppg (Lex/Yacc). It's developed in C#.SP2010Tutorials: Code for learning SharePoint 2010The Social Developer: This is a social developer tool for programmers to create and share projects using the .Net framework and other technologies and integrate it into a socialistic approach of sharing the work load and the resources needed to develop high level applications. Traffic-sign Classification: Traffic sign shape classification and localization.unnamedyet: Experimental! para Investigadores de Sistemas. Objetivo! desarrollar una praxis tal que con un conjunto finito y discreto de términos para describir sea posible auto-demostrar y ejecutar cualquier proposición dada.VSSpeedster - Parallel Builds for VS: Improve the performance of your Visual Studio: - Parallel Builds integrated in visual studioWebservice Xslt Transformer WebPart for SharePoint 2010: The Dynamic Webservice Xslt Transformer WebPart makes it much easier for SharePoint Developers and Administrators to call the webservice and transform the results directly to HTML by providing their own custom xslt. The properties can be set on the webpart by using the UI.WilWaNet.HASH: An ASP.NET MVC web site designed for tracking nutrition for the purposes of losing weight. Tracks calories, fat calories, fat grams and saturated fat along with daily weight and exercise. Includes daily Basic Metabolic Rate calculation and graphing functions.WP7 Try it 01: The first try in wp7WPF TryIt 01: Quan ly Nhan khau WPF ApplicationWX Alerter CAP/XML: NWS Alerter using CAP 1.1 alerting protocol. The goal of this project is to consume weather alerts from the NWS site. The user will select the city or SAME code/zone to watch. As alerts trigger notices will display and info will fill the Alert Tab.

    Read the article

  • CodePlex Daily Summary for Saturday, February 19, 2011

    CodePlex Daily Summary for Saturday, February 19, 2011Popular ReleasesAdvanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under Lockscreen. Fixed a bug when you open a pdf/mobi file without starting adobe reader/amazon kindle first boost loading time for folders added \Windows directory (all devices) you can now interact with the filesystem while it is loading!Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The 2° beta release of the Map Editor, we have fixed a big bug of the files regen.Document.Editor: 2011.6: Whats new for Document.Editor 2011.6: New Left to Right and Left to Right support New Indent more/less support Improved Home tab Improved Tooltips/shortcut keys Minor Bug Fix's, improvements and speed upsCatel - WPF and Silverlight MVVM library: 1.2: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com =========== Version 1.2 =========== Release date: ============= 2011/02/17 Added/fixed: ============ (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure SQL Azure Windows Azure AppFabric Windows Live Messenger Connect Bing Maps ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????:http://blog.csdn.net/sjb5201/archive/2011...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 29 (beta)New: Added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters, when using TSVN 1.4.x or older #7338 (thanks psifive) Fix: Add-in does not work, when "TortoiseSVN/bin" is not added to PATH environment variable #7357 Fix: Missing error message when ...Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: Sense/Net 6.0.3 Community Edition We are happy to introduce you the latest version of Sense/Net with integrated ECM Workflow capabilities! In the past weeks we have been working hard to migrate the product to .Net 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. 
This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features Added a new option that allows properties on data contract types to be marked as virtual. Bug Fixes Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a Portable Environment, you can use the "Clean Install" and target your portable folder. Tested and it works fine. Changes for this release: Re-worked on the Toolstip settings are done, just to avoid the vs.net clash with auto-generating files for .settings files. renamed it to .settings.config Packged both log4net and ToolStripSettings files into the installer Upgraded the version inform...AllNewsManager.NET: AllNewsManager.NET 1.3: AllNewsManager.NET 1.3. This new version provide several new features, improvements and bug fixes. Some new features: Online Users. Avatars. Copy function (to create a new article from another one). SEO improvements (friendly urls). New admin buttons. And more...Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011)moved to Beta stage publish photo feature "email" field of User object added new Graph Api object: Group, Event new Graph Api connection: likes, groups, eventsDJME - The jQuery extensions for ASP.NET MVC: DJME2 -The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can goto http://www.dotnetage.com/djme.html What is new ?The Grid extension added The ModelBinder added which helping you create Bindable data Action. The DnaFor() control factory added that enabled Model bindable extensions. Enhance the ListBox , ComboBox data binding.Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features Many bugfixesBuild Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: v2.4.11046.2045 Fixes and/or Improvements:Major: Added complete support for VC projects including .vcxproj & .vcproj. All padding issues fixed. A project's assembly versions are only changed if the project has been modified. Minor Order of versioning style values is now according to their respective positions in the attributes i.e. Major, Minor, Build, Revision. Fixed issue with global variable storage with some projects. Fixed issue where if a project item's file does not exist, a ...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. 
Bug fixes and minor feature requests addedTV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1 including the broken per-desktop backgrounds Further improves the speed of switching desktops A few UI performance improvements Added donations linksNuGet: NuGet 1.1: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit this our issues list A...New ProjectsComplexityEvolution: Research projectCRM 2011 Metadata Browser: The CRM 2011 Metadata Browser is a Silverlight 4 application that is packaged as a Managed CRM Solution. This tool allows you to view metadata including Entities, Attributes and Relationships. The 2011 SOAP endpoint is used to connect to CRM using the Organization.svc/web serviceEFCFvsNH3: A sample project that shows the main differences between Entity Framework Code First and Nhibernate 3: -Mapping -Configuration -DB Initialization -Query API -Session & Transaction -ValidationE-Teacher for IELTS preparation: E-teacher helps IELTS students prepare for the IELTS Academic and General Training test. Qualified English Teachers can register to the e-community and helps candidates to understand what they really need to improve for the IELTS exam and how to reach for the maximum band score.FIM CM Extensions: Extensions for Forefront Identity Manager 2010 to enable integration between the FIM Service workflow and the FIM Certificate Management workflow. Game Files Open - Map Editor: This is a map editor for the metin2 clients, it's very simple edit or create a map with this tool.Garbage Collection Sample Code: Garbage collection sample code demonstrates the differences between the large and small object heaps. This code supports the blog post at http://www.deepcode.co.ukGardenersWorld: The aim of gardenersWorld community website is to provide a platform for budding gardenening enthusiasts, hobbyists and professionals to share information. Harvester - Debug Monitor for Log4Net and NLog: Harvester enables you to monitor all Win32 debug output from all applications running on your machine. Watch real time Log4Net and NLog output across multiple applications at the same time. Trace a call from client to server and back without having to look at multiple log files.Hjelp! Jeg skal ha farmakokinetikk-eksamen!: Sliter du med å pugge formler til farmakokinetikk-eksamenen? Da er redningen din her! :DMercury Business Framework: Mercury Business Framework is a project set up to define basic objects used by the vast majority of business and non business software. The idea is to define the low level objects required by most applications on the web and desktop.MyDistrictBuilder: MyDistrictBuilder allows anybody to build legislative districts and submit to the Florida House of Rep. It is built on Bing Maps, Silverlight and AZURE. Written in C#. It is written to allow anyone to adapt for any states census geography. 
www.floridaredistricting.cloudapp.netMySchoolApp: MySchoolApp is a customizable application written in Visual Basic and C# for the Windows Mobile Phone 7 platform using Visual Studio Professional 2010. The application combines links to RSS and Web sites about a school, and displays a map and local weather. Osm Parser Community Edition: Osm Parser parse highways in open street maps to generate routable roads network in spatialite.PRBox Cloud Website: This is the website base for PRBox.com SEO Reporter : open source search engine optimization software: SEO Reporter is an open source search engine optimization application for detecting HTML related SEO violations, gathering data about a Web page and analyzing its keywords strategy. It's a Windows navigation application developed in F#. Setting timeout for SharePoint 2010 Silverlight web part: This web part overwrites 5sec hard-coded timeout for SharePoint 2010 Silverlight web part.SharePoint 2010 Central Administration Automatic Resources Link Generator: This feature will automatically generate the resources links list (Quick Links) in your SharePoint 2010 Central Administration site making it easier for SharePoint Admins to navigate through the common Central Administration activities without populating it themselves - VS2010/c#SharePoint Holiday Loader: SharePoint Holiday Loader allows you to quickly import public holidays into a SharePoint calendar from the standard .HOL format.Sohu?????: ?????????WPF?????????????,????????????(??、??、???),??、??、???????,????????????,??????????????。 ??V1????????,V2?????????????????。SP2010 Form Manipulator: This project will hopefully make it easier to manipulate the list form in SharePoint 2010.SPRotator: A jQuery powered web part for SharePoint that cycles through any type of list.SQL Script to Create a Website Directory: Here you can download sql script to create a website directory using SQL Server. * This is only the easy directory sql script to develop your website. Directory software may publish in future.Sri Hits Zone: This is an online repository which used to share Sri Lankan music. This is to provide Sri Lankans who living abroad to being touches with Sri Lankan artist and their music. testz: testzTime domain dissipative acoustic problem: tddapWindows Azure Hosted Services VM Manager: Windows Azure Hosted Services VM Manager is a Windows Service that can manage the number of hosted services running in Azure by either a time based schedule or by CPU load. This allows your service to scale for either dynamic load or a known schedule. Z80TR: Z80TR

    Read the article

  • Rounded Corners and Shadows &ndash; Dialogs with CSS

    - by Rick Strahl
Well, it looks like we've finally arrived at a place where at least all of the latest versions of mainstream browsers support rounded corners and box shadows. The two CSS properties that make this possible are box-shadow and border-radius. Both of these CSS properties are now supported in all the major browsers, as shown in this chart from QuirksMode: In its simplest form you can use box-shadow and border-radius like this: .boxshadow { -moz-box-shadow: 3px 3px 5px #535353; -webkit-box-shadow: 3px 3px 5px #535353; box-shadow: 3px 3px 5px #535353; } .roundbox { -moz-border-radius: 6px 6px 6px 6px; -webkit-border-radius: 6px; border-radius: 6px 6px 6px 6px; } box-shadow: horizontal-shadow-pixels vertical-shadow-pixels blur-distance shadow-color The box-shadow attributes specify the horizontal and vertical offset of the shadow, the blur distance (to give the shadow a smooth soft look) and a shadow color. The spec also supports multiple shadows separated by commas using the attributes above, but we're not using that functionality here. border-radius: top-left-radius top-right-radius bottom-right-radius bottom-left-radius border-radius takes a pixel size for the radius of each corner going clockwise. CSS 3 also specifies each of the individual corner elements such as border-top-left-radius, but support for these is much less prevalent so I would recommend not using them for now until support improves. Instead use the single border-radius to specify all corners. Browser specific Support in older Browsers Notice that there are two variations: the actual CSS 3 properties (box-shadow and border-radius) and the browser specific ones (-moz, -webkit prefixes for FireFox and Chrome/Safari respectively) which work in slightly older versions of modern browsers before official CSS 3 support was added. The goal is to spread support as widely as possible and the prefix versions extend the range slightly more to those browsers that provided early support for these features. Notice that box-shadow and border-radius are used after the browser specific versions to ensure that the latter versions get precedence if the browser supports both (last assignment wins). Use the .boxshadow and .roundbox Styles in HTML To use these two styles to create a simple rounded box with a shadow you can use HTML like this: <!-- Simple Box with rounded corners and shadow --> <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="boxcontenttext"> Simple Rounded Corner Box. </div> </div> which looks like this in the browser: This works across browsers and it's pretty sweet and simple. Watch out for nested Elements! There are a couple of things to be aware of however when using rounded corners. Specifically, you need to be careful when you nest other non-transparent content inside the rounded box. For example check out what happens when I change the inside <div> to have a colored background: <!-- Simple Box with rounded corners and shadow --> <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="boxcontenttext" style="background: khaki;"> Simple Rounded Corner Box. </div> </div> which renders like this: If you look closely you'll find that the inside <div>'s corners are not rounded and so 'poke out' slightly over the rounded corners. It looks like the rounded corners are 'broken' up instead of a solid rounded line around the corner, which is pretty ugly. The bigger the radius, the more drastic this effect becomes.
To fix this issue the inner <div> also has to have rounded corners at the same or a slightly smaller radius than the outer <div>. The simple fix is to also apply the roundbox style to the inner <div> in addition to the boxcontenttext style already applied: <div class="boxcontenttext roundbox" style="background: khaki;"> The fixed display now looks proper: Separate Top and Bottom Elements This gets even a little more tricky if you have an element at only the top or bottom of the rounded box. What if you need to add something like a header or footer <div> that has a non-transparent background, which is a pretty common scenario? In those cases you want only the top or bottom corners rounded and not both. To make this work, a couple of additional styles that round only the top or bottom corners can be created: .roundbox-top { -moz-border-radius: 4px 4px 0 0; -webkit-border-radius: 4px 4px 0 0; border-radius: 4px 4px 0 0; } .roundbox-bottom { -moz-border-radius: 0 0 4px 4px; -webkit-border-radius: 0 0 4px 4px; border-radius: 0 0 4px 4px; } Notice that the radius used for the 'inside' rounding is smaller (4px) than the outside radius (6px). This is so the inner radius fills into the outer border – if you use the same size you may have some white space showing between the inner and outer rounded corners. Experiment with values to see what works – in my experimenting the behavior across browsers here is consistent (thankfully). These styles can be applied in addition to other styles to make only the top or bottom portions of an element rounded. For example imagine I have styles like this: .gridheader, .gridheaderbig, .gridheaderleft, .gridheaderright { padding: 4px 4px 4px 4px; background: #003399 url(images/vertgradient.png) repeat-x; text-align: center; font-weight: bold; text-decoration: none; color: khaki; } .gridheaderleft { text-align: left; } .gridheaderright { text-align: right; } .gridheaderbig { font-size: 135%; } If I just apply, say, gridheader by itself in HTML like this: <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="gridheaderleft">Box with a Header</div> <div class="boxcontenttext" style="background: khaki;"> Simple Rounded Corner Box. </div> </div> This results in a pretty funky display – again due to the fact that the inner elements render square rather than rounded corners: If you look closely again you can see that both the header and the main content have square edges, which jumps out at the eye. To fix this you can now apply roundbox-top and roundbox-bottom to the header and content respectively: <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="gridheaderleft roundbox-top">Box with a Header</div> <div class="boxcontenttext roundbox-bottom" style="background: khaki;"> Simple Rounded Corner Box. </div> </div> This now gives the proper display with rounded corners both on the top and bottom: All of this is sweet to have supported – at least by the newest browsers – without having to resort to images and nasty JavaScript solutions. While this is still not a mainstream feature for the majority of actually installed browsers, the majority of browser users are very likely to have this support as most browsers other than IE are actively pushing users to upgrade to newer versions. Since this is a 'visual display only' feature it degrades reasonably well in non-supporting browsers: you get an uninteresting square and non-shadowed box, but the display is still overall functional.
The main sticking point – as always is Internet Explorer versions 8.0 and down as well as older versions of other browsers. With those browsers you get a functional view that is a little less interesting to look at obviously: but at least it’s still functional. Maybe that’s just one more incentive for people using older browsers to upgrade to a  more modern browser :-) Creating Dialog Related Styles In a lot of my AJAX based applications I use pop up windows which effectively work like dialogs. Using the simple CSS behaviors above, it’s really easy to create some fairly nice looking overlaid windows with nothing but CSS. Here’s what a typical ‘dialog’ I use looks like: The beauty of this is that it’s plain CSS – no plug-ins or images (other than the gradients which are optional) required. Add jQuery-ui draggable (or ww.jquery.js as shown below) and you have a nice simple inline implementation of a dialog represented by a simple <div> tag. Here’s the HTML for this dialog: <div id="divDialog" class="dialog boxshadow" style="width: 450px;"> <div class="dialog-header"> <div class="closebox"></div> User Sign-in </div> <div class="dialog-content"> <label>Username:</label> <input type="text" name="txtUsername" value=" " /> <label>Password</label> <input type="text" name="txtPassword" value=" " /> <hr /> <input type="button" id="btnLogin" value="Login" /> </div> <div class="dialog-statusbar">Ready</div> </div> Most of this behavior is driven by the ‘dialog’ styles which are fairly basic and easy to understand. They do use a few support images for the gradients which are provided in the sample I’ve provided. Here’s what the CSS looks like: .dialog { background: White; overflow: hidden; border: solid 1px steelblue; -moz-border-radius: 6px 6px 4px 4px; -webkit-border-radius: 6px 6px 4px 4px; border-radius: 6px 6px 3px 3px; } .dialog-header { background-image: url(images/dialogheader.png); background-repeat: repeat-x; text-align: left; color: cornsilk; padding: 5px; padding-left: 10px; font-size: 1.02em; font-weight: bold; position: relative; -moz-border-radius: 4px 4px 0px 0px; -webkit-border-radius: 4px 4px 0px 0px; border-radius: 4px 4px 0px 0px; } .dialog-top { -moz-border-radius: 4px 4px 0px 0px; -webkit-border-radius: 4px 4px 0px 0px; border-radius: 4px 4px 0px 0px; } .dialog-bottom { -moz-border-radius: 0 0 3px 3px; -webkit-border-radius: 0 0 3px 3px; border-radius: 0 0 3px 3px; } .dialog-content { padding: 15px; } .dialog-statusbar, .dialog-toolbar { background: #eeeeee; background-image: url(images/dialogstrip.png); background-repeat: repeat-x; padding: 5px; padding-left: 10px; border-top: solid 1px silver; border-bottom: solid 1px silver; font-size: 0.8em; } .dialog-statusbar { -moz-border-radius: 0 0 3px 3px; -webkit-border-radius: 0 0 3px 3px; border-radius: 0 0 3px 3px; padding-right: 10px; } .closebox { position: absolute; right: 2px; top: 2px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 1; filter: alpha(opacity="100"); } The main style is the dialog class which is the outer box. It has the rounded border that serves as the outline. Note that I didn’t add the box-shadow to this style because in some situations I just want the rounded box in an inline display that doesn’t have a shadow so it’s still applied separately. dialog-header, then has the rounded top corners and displays a typical dialog heading format. 
dialog-bottom and dialog-top then provide the same functionality as roundbox-top and roundbox-bottom described earlier, but are provided in the stylesheet mainly for consistency, to match the dialog's round edges and to make them easier to remember and find in IntelliSense since they show up in the same dialog- group. dialog-statusbar and dialog-toolbar are two elements I use a lot for floating windows – the toolbar typically serves for buttons, options and filters, while the status bar provides information specific to the floating window. Since the status bar is always on the bottom of the dialog it automatically handles the rounding of the bottom corners. Finally there's the closebox style, which is typically applied to an empty <div> tag in the header. What this does is render a close image that is by default low-lighted with a low opacity value, and then highlights when hovered over. All you'd have to do to handle the close operation is handle the onclick of the <div>. Note that the <div> right aligns, so typically you should specify it before any other content in the header. Speaking of closable – some time ago I created a closable jQuery plug-in that basically automates this process and can be applied against ANY element in a page, automatically removing or closing the element with some simple script code. Using this you can leave out the <div> tag for closable and just do the following: To make the above dialog closable (and draggable), which makes it effectively an overlay window, you'd add jQuery.js and ww.jquery.js to the page: <script type="text/javascript" src="../../scripts/jquery.min.js"></script> <script type="text/javascript" src="../../scripts/ww.jquery.min.js"></script> and then simply call: <script type="text/javascript"> $(document).ready(function () { $("#divDialog") .draggable({ handle: ".dialog-header" }) .closable({ handle: ".dialog-header", closeHandler: function () { alert("Window about to be closed."); return true; // true closes - false leaves open } }); }); </script> * ww.jquery.js emulates the base features of jQuery-ui's draggable. If jQuery-ui is loaded its draggable version will be used instead. And voila – you now have a draggable and closable window – here in mid-drag: The dragging and closable behaviors are of course optional, but they are the final touch that provides dialog-like window behavior. Relief for older Internet Explorer Versions with CSS Pie If you want to get these features to work with older versions of Internet Explorer all the way back to version 6 you can check out CSS Pie. CSS Pie provides an Internet Explorer behavior file that attaches to specific CSS rules and simulates these behaviors using script code in IE (mostly by implementing filters). You can simply add the behavior to each CSS style that uses box-shadow and border-radius like this: .boxshadow { -moz-box-shadow: 3px 3px 5px #535353; -webkit-box-shadow: 3px 3px 5px #535353; box-shadow: 3px 3px 5px #535353; behavior: url(scripts/PIE.htc); } .roundbox { -moz-border-radius: 6px 6px 6px 6px; -webkit-border-radius: 6px; border-radius: 6px 6px 6px 6px; behavior: url(scripts/PIE.htc); } CSS Pie requires PIE.htc to be on your server, referenced from each CSS style that needs it.
Note that the url() for IE behaviors is NOT CSS-file relative like other CSS resources, but rather PAGE relative, so if you have more than one folder you probably need to reference the HTC file with a fixed path like this: behavior: url(/MyApp/scripts/PIE.htc); in the style. A small price to pay, but a royal pain if you have a common CSS file you use in many applications. Once the PIE.htc file has been copied and you have applied the behavior to each style that uses these new features, Internet Explorer will render rounded corners and box shadows! Yay! Hurray for box-shadow and border-radius All of this functionality is very welcome natively in the browser. If you think this is all frivolous visual candy, you might be right :-), but if you take a look on the Web and search for rounded corner solutions that predate these CSS attributes you'll find a boatload of stuff from image files, to custom drawn content, to JavaScript solutions that play tricks with a few images. It's sooooo much easier to have this functionality built in, and I for one am glad to see that it's finally becoming standard in the box. Still, remember that when you use these new CSS features they are not universal, and are not going to be any time soon. Legacy browsers, especially old versions of Internet Explorer that can't be updated, will continue to be around and won't work with this shiny new stuff. I say screw 'em: let them get a decent recent browser or see a degraded and ugly UI. We have the luxury with this functionality in that it doesn't typically affect usability – it just doesn't look as nice. Resources Download the Sample The sample includes the styles, images and sample page as well as ww.jquery.js for the draggable/closable example. Online Sample Check out the sample described in this post online. Closable and Draggable Documentation Documentation for the closable and draggable plug-ins in ww.jquery.js. You can also check out the full documentation for all the plug-ins contained in ww.jquery.js here. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTML, CSS

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC. Performance Tools After reading Steve Souders two (very excellent) books on front-end website performance High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders’ Performance Golden Rule: “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent” You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application. 1. Sprite and Image Optimization Framework CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869. The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here’s an example of what image inlining looks like: .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAAB GdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKL s+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; } The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. 
I plan to write about these Gotchas and workarounds in a future blog entry. 2. Microsoft Ajax Minifier Whenever possible you should combine, minify, compress, and cache with a far future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don’t confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying a JavaScript file in addition to compressing the file. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff. The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to minify all of your JavaScript and CSS files automatically whenever you do a build within Visual Studio. Read the Ajax Minifier Quick Start to learn how to configure the build task. 3. ySlow The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website is using the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery. Uptime After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live. 4. ELMAH ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here’s what the elmah.axd page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user).
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex. 5. Pingdom I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string “Contact Us” from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app which looks like this: 6. Host Tracker If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here’s what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA. Debugging I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake. 7. HTML Spell Checker Why doesn’t Visual Studio have a built-in spell checker? Don’t know – I’ve always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I’m always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker: 8. IIS SEO Toolkit If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here’s what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur. 9. LinqPad If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results – for instance, a query like the sketch shown below.
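The model here is hypothetical (a Customers set with a related Orders collection, exposed by a typed LinqPad connection); it is only meant to show the kind of query whose generated SQL is worth inspecting.

// Hypothetical model: Customers with an Orders collection, available directly in LinqPad "C# Statements" mode.
var busyCustomers =
    from c in Customers
    where c.Orders.Count() > 10          // filtering on a child collection - worth checking the generated SQL
    orderby c.Name
    select new { c.Name, OrderCount = c.Orders.Count() };

busyCustomers.Dump();                    // Dump() is LinqPad's built-in way to render results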
You also can use it to see the resulting SQL that gets executed against the database: 10. .NET Reflector I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the “Source Code” of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here’s part of the disassembled code from the Image helper class: Summary In this blog entry, I’ve discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add-on to make it easy to build cross-platform connected mobile apps. Pusher allows you to quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.
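    To give a feel for what a Mobile Services back-end call looks like from C#, here is a rough, self-contained sketch using the Mobile Services client SDK; the service URL, application key, and the TodoItem type are placeholders rather than values from this announcement:

        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices;

        // Placeholder data type; the service stores it in a table of the same name.
        public class TodoItem
        {
            public string Text { get; set; }
            public bool Complete { get; set; }
        }

        public static class App
        {
            // Hypothetical service URL and application key.
            public static readonly MobileServiceClient MobileService =
                new MobileServiceClient(
                    "https://your-service.azure-mobile.net/",
                    "YOUR-APPLICATION-KEY");

            // Inserts a row into the TodoItem table over HTTPS.
            public static Task SaveAsync(TodoItem item)
            {
                return MobileService.GetTable<TodoItem>().InsertAsync(item);
            }
        }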
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking and then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio, as shown in the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).
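    Conceptually, an auto-scale rule behaves like the small sketch below. This is purely illustrative C#: the class and method names are made up and nothing here is the Windows Azure API; the platform evaluates equivalent rules for you.

        // Illustrative only: how a CPU-based auto-scale rule decides on an instance count.
        public class CpuAutoScaleRule
        {
            public int MinInstances { get; set; }           // e.g. 1
            public int MaxInstances { get; set; }           // e.g. 5
            public double ScaleOutCpuPercent { get; set; }  // e.g. 70
            public double ScaleInCpuPercent { get; set; }   // e.g. 40

            public int NextInstanceCount(int current, double aggregateCpuPercent)
            {
                if (aggregateCpuPercent > ScaleOutCpuPercent && current < MaxInstances)
                    return current + 1;   // load is high: add a VM to the pool
                if (aggregateCpuPercent < ScaleInCpuPercent && current > MinInstances)
                    return current - 1;   // load is low: shut a VM down to save money
                return current;           // within the configured band: leave it alone
            }
        }

    The 1 to 5 instance range and the 40-70% CPU band mirror the example configuration described below.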
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MSDN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CodePlex Daily Summary for Friday, December 10, 2010

    CodePlex Daily Summary for Friday, December 10, 2010Popular ReleasesFree Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, Today we are releasing final version of Visifire, v3.6.5 with the following new feature: * New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. You can visit Visifire documentation to know more. http://www.visifire.com/visifirechartsdocumentation.php Also this release includes few bug fixes: * Chart threw exception while adding new Axis in Chart using Vi...PHPExcel: PHPExcel 1.7.5 Production: DonationsDonate via PayPal via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channelWe now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute?Please refer the Contribute page.UserVoice Helper for WebMatrix: UserVoice Helper v0.9: This version will work with ASP.NET WebPages and ASP.NET MVC ApplicationsDNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03) There are C# and VB versions of this module for this initial release. No promises that going forward there will be packages for both languages provided for future releases. This module provides the following functionality Create and display articles Display a paged list of articles Articles get created as DNN ContentItems Categorization provided through DNN Taxonomy SEO functionality for article display providi...UOB & ME: UOB_ME 2.5: latest versionCouchDB.NET: CouchDB.NET 0.1: CouchDB.NET ------- Libraries and providers to use CouchDB features from .NET This distribution includes the following projects: - MachineKeyGenerator: Command line tool to generate a machine key string for use in App.Config and Web.Config files. - CouchDB.NET: Library to facilitate the use of CouchDB features. It uses Hadi Hariri's EasyHttp library to communicate with the CouchDB server. More info at: https://github.com/hhariri/EasyHttp - CouchDb.ASP.NET: ASP.NET Membership Provider and ASP...AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (It is still recommended to use with caution and to read the documentation) It is now possible to associate *.lolm files with AutoLoL to quickly open them The selected spells are now displayed in the masteries tab for qu...SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations. Now PHP Manager detects the location to php.ini file in accordance to the PHP specifications Configuring date.timezone. 
PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3 Ability to ...Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug that the running state hadn't been updated after socket server stopped fixed a synchronization issue when clearing timeout session fixed a bug in ArraySegmentList fixed a bug on getting configuration valueCslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2 and was released 2010 December 7 and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi Templates-2010-10-07.zip For getting started instructions, refer to How to section. Overview of the changes Since CTP1 there were 53 work items closed (28 features, 24 issues and 1 task). During this 60 days a lot of work has been done on several areas. First the stereotypes: EditableRoot is OK EditableChild is OK EditableRootCollection is OK Editable...My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.EnhSim: EnhSim 2.2.0 ALPHA: 2.2.0 ALPHAThis release adds in the changes for 4.03a. at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: popup WhiteSpaceFilterAttribute tested on mozilla, safari, chrome, opera, ie 9b/8/7/6nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).myCollections: Version 1.2: New in version 1.2: Big performance improvement. New Design (Added Outlook style View, New detail view, New Groub By...) Added Sort by Media Added Manage Movie Studio Zoom preference is now saved. Media name are now editable. 
Added Portuguese version You can now Hide details panel Add support for FLAC tags You can now imports books from BibTex Xml file BugFixingmytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) mytrip.mvc 1.0.49.0 beta src System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.4, MVC3 RC WARNING For run and debug mytrip.mvc 1.0.49.0 beta src download and ...Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported(where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element(all content is now placed in OverlayCanvas and is accessed by the new ...MiniTwitter: 1.62: MiniTwitter 1.62 ???? ?? ??????????????????????????????????????? 140 ?????????????????????????? ???????????????????????????????? ?? ??????????????????????????????????New ProjectsAccountingGuid: for testing onlyChinese Nag Screen: This is a simple but effective program for learning to recognize Mandarin characters. The application sits in the system tray and displays a character random through your day. You can only get rid of it by typing in the pinyin.CouchDB.NET: .NET libraries to use CouchDB from .NET. Included are Membership and Roles provider so that you may use CouchDB as your integrated DB backend on your ASP.NET projects. Please see the readme.txt file for instructions.DataSetMapper: The idea behind DataSetMapper is to provide support for the automatic mapping of legacy DataSet based structures to proper domain objects. In essence the aim is to create the Mapping aspect of an ORM without the persistence concerns.EasyXnaAudio: EasyXnaAudio is a simple component for use in XNA Game Studio 3.1/4.0 projects that provides an easy interface to load, play, and manage songs and sounds in your game.FixMailboxSD - Exchange Mailbox Security Descriptor Canonicalizer: This is a small utility to fix mailbox security descriptors in Microsoft Exchange that have become non-canonical. It must be run on a machine with Exchange System Manager for Exchange 2003 installed, but it will work against mailboxes on 2003 or 2007 (not 2010).GearSynth Plugin: a plugin for graphsynth that makes gear trainsGroceryList: TBD with first versionIBMS Suite Build on the Associate Platform: A new way of approaching Information Systems. From the UI, users of the IS will be able to build and manipulate the IS to whatever way fits their needs. We have simplified development, removed the chasm between management and IT and give the power of simplification to the user!Ivy Nasha Framework: A PHP FrameworkjQuery helpers for ASP.NET and ASP.NET MVC: jQuery helpers makes it easier for ASP.NET developers to build jQuery scripts. It's developed in C#. JSTest.NET: JSTest.NET enabled JavaScript unit tests to be run directly in the test framework of your choice (MSTest, NUnit, xUnit, etc) and all without the need for a web browser. 
JSTest.NET utilizes the Windows Script Host (CScript) to run fast, fully debuggable JavaScript unit tests!Multicore Task Framework: MTF is a visual tool to simplify building robust component based .NET applications. MTF is designed to make full use of the power of multi-core processors.Nazha Script On DLR: NazhaPascalESE - a Delphi/Pascal class library for Microsoft ESENT database API: This pascal class library, primarily written for Delphi's Object Pascal, provides a lightweight and easy-to-use wrapper around the ESENT API. Perpetuum Hangar: A Character planner for the online game "Perpetuum"Projeto Exemplo: Projeto exemplo para a atividade 3 da disciplina.PSiteCode: PSiteCode Manager rScript Engine: rScript scripting engine is a managed script engine wrote in C# that supports Visual Basic and C# syntax based scripts. It provides Type's for dynamically getting and setting properties, invoking methods and run-time compilation of scripts.SharePoint 2010 User Profile WebPart: This webpart shows all user profile properties and values of the properties for a particular user profile. The results are shown in a table containing the display and technical name together with the user value.SHC: shriSHMTools: SHMTools is set of compatible software tools (mostly Matlab based) for structural health monitoring (SHM) research. This includes algorithms for system design, modeling, data acquisition, feature extraction, classification, and prognosis.SwapWin: SwapWin is a tiny and handy tool which swaps windows on different screens. Developed in C# and .NET 3.5.Teachers Diary: Teachers diary is application realizing electronic teacher's notepad with student marks. Current localization of the application is in czech language only.VkApp: Vk app for downloadingWebSpirit: A lightweighted web server implemented by C# which supports sufficient extendible feature. By zjuWPF & MEF Studio: WPF & MEF Studio

    Read the article

  • 12.04 upgrade broke grub? (not wubi related)

    - by kaare
    I just updated from 11.10 to 12.04, with no major problems (it took a while to get past a request to restart ssh, mysql and some other services, but I did no fiddling by myself, everything was done by the installer). However, after restarting, grub can't do anything. Picking the new linux installation (first entry), I just get error: no such partition error: no such partition error: no such partition and picking the recovery-version just gives 5 lines instead of 3. I have windows 7 installed on a different drive, and can run it by booting from that drive instead. Picking it from the grub menu gives the same error as above (can't remember how many lines, though). I'll be honest and say that I don't remember if win 7 could be booted from grub before the update, though. In short, nothing on the grub menu works. any solutions? The grub menu changed appearance - before it was on a purple background, small letters, now it's white-on-black, big letters, looking very basic. The original installation was from a usb-drive, and I hadn't heard about wubi until I started googling this problem, so I doubt there's any connection. I really hope there are some grub-savvy people out there :) EDIT: ok. so, I made a bootable usb, and am running from that right now. when I ran the bootinfoscript, it warned me that "gawk" could not be found, using "busybox awk" instead. This may lead to unreliable results. just so you know. The contents of RESULTS.txt are: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Windows is installed in the MBR of /dev/sda. => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos3)/boot/grub on this drive. => Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sdc. sda1: __________________________________________ File system: vfat Boot sector type: Dell Utility: FAT16 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /DELLBIO.BIN /DELLRMK.BIN /COMMAND.COM sda2: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda4: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files: /boot.ini /bootmgr /ntldr /NTDETECT.COM sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sdb2: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sdb3 and looks at sector 375893584 of the same hard drive for core.img. 
core.img is at this location and looks for (,msdos3)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb4: __________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdc1: __________________________________________ File system: ntfs Boot sector type: SYSLINUX 4.06 4.06-pre1 Boot sector info: Syslinux looks at sector 4649656 of /dev/sdc1 for its second stage. SYSLINUX is installed in the directory. The integrity check of the ADV area failed. No errors found in the Boot Parameter Block. Operating System: Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 63 240,974 240,912 de Dell Utility /dev/sda2 241,664 21,213,183 20,971,520 7 NTFS / exFAT / HPFS /dev/sda3 * 21,213,184 483,151,863 461,938,680 7 NTFS / exFAT / HPFS /dev/sda4 483,151,872 488,394,751 5,242,880 f W95 Extended (LBA) /dev/sda5 483,153,920 488,394,751 5,240,832 dd Dell Media Direct Drive: sdb _______________________________________ Disk /dev/sdb: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 63 345,886,749 345,886,687 7 NTFS / exFAT / HPFS /dev/sdb2 345,888,768 361,510,911 15,622,144 82 Linux swap / Solaris /dev/sdb3 * 361,510,912 390,807,786 29,296,875 83 Linux /dev/sdb4 390,809,600 488,394,751 97,585,152 83 Linux Drive: sdc _______________________________________ Disk /dev/sdc: 8015 MB, 8015282176 bytes 255 heads, 63 sectors/track, 974 cylinders, total 15654848 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 * 2,048 15,652,863 15,650,816 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 07D8-0411 vfat DellUtility /dev/sda2 E2765BBC765B9061 ntfs RECOVERY /dev/sda3 98DC5E54DC5E2D2E ntfs OS /dev/sda5 7061-9DF5 vfat MEDIADIRECT /dev/sdb1 01CBBB4C3374C3B0 ntfs Data1 /dev/sdb2 1ca45f3f-f888-43d1-8137-02699597189a swap /dev/sdb3 6bc1b599-ad4b-403c-a155-a5bc81211f5e ext4 /dev/sdb4 58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 /dev/sdc1 0C02B64402B63316 ntfs PENDRIVE ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sdb4 /media/58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 (rw,nosuid,nodev,uhelper=udisks) /dev/sdc1 /cdrom fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096) ================================ sda5/boot.ini: ================================ [boot loader] timeout=0 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Embedded" /fastdetect /KERNEL=NTOSBOOT.EXE /maxmem=1024 =========================== sdb3/boot/grub/grub.cfg: 
=========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, with Linux 3.2.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.2.0-24-generic ...' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.0.0-19-generic } menuentry 'Ubuntu, with Linux 3.0.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.0.0-19-generic ...' linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.0.0-19-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos3)' search --no-floppy --fs-uuid --set=root 98DC5E54DC5E2D2E chainloader +1 } menuentry "Microsoft Windows XP Embedded (on /dev/sda5)" --class windows --class os { insmod part_msdos insmod fat set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root 7061-9DF5 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sdb3/etc/fstab: ================================ # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb3 during installation UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e / ext4 errors=remount-ro 0 1 # /home was on /dev/sdb4 during installation UUID=58e2b257-8608-4b11-b20b-dc162bb80b62 /home ext4 defaults,user_xattr 0 2 # swap was on /dev/sdb2 during installation UUID=1ca45f3f-f888-43d1-8137-02699597189a none swap sw 0 0 =================== sdb3: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.0.0-19-generic 2 = boot/initrd.img-3.2.0-24-generic 2 = boot/vmlinuz-3.0.0-19-generic 2 = boot/vmlinuz-3.2.0-24-generic 1 = vmlinuz 1 = vmlinuz.old 2 =========================== sdc1/boot/grub/grub.cfg: =========================== if loadfont /boot/grub/font.pf2 ; then set gfxmode=auto insmod efi_gop insmod efi_uga insmod gfxterm terminal_output gfxterm fi set menu_color_normal=white/black set menu_color_highlight=black/light-gray menuentry "Try Ubuntu without installing" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash -- initrd /casper/initrd.lz } menuentry "Install Ubuntu" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper only-ubiquity quiet splash -- initrd /casper/initrd.lz } menuentry "Check disc for defects" { set gfxpayload=keep linux /casper/vmlinuz boot=casper integrity-check quiet splash -- initrd /casper/initrd.lz } ========================= sdc1/syslinux/syslinux.cfg: ========================== # D-I config version 2.0 include menu.cfg default vesamenu.c32 prompt 0 timeout 50 # If you would like to use the new menu and be presented with the option to install or run from USB at startup, remove # from the following line. This line was commented out (by request of many) to allow the old menu to be presented and to enable booting straight into the Live Environment! # ui gfxboot bootlogo =================== sdc1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) ?? = ?? boot/grub/grub.cfg 0 ================= sdc1: Location of files loaded by Syslinux: ================== GiB - GB File Fragment(s) ?? = ?? ldlinux.sys 1 ?? = ?? syslinux/chain.c32 1 ?? = ?? syslinux/gfxboot.c32 1 ?? = ?? syslinux/syslinux.cfg 0 ?? = ?? syslinux/vesamenu.c32 1 ============== sdc1: Version of COM32(R) files used by Syslinux: =============== syslinux/chain.c32 : COM32R module (v4.xx) syslinux/gfxboot.c32 : COM32R module (v4.xx) syslinux/vesamenu.c32 : COM32R module (v4.xx) =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in ./bootinfoscript: line 1646: [: 2.73495e+09: integer expression expected

    Read the article

  • ASP.NET MVC 3: Layouts and Sections with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: Introducing Razor (July 2nd) New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (Dec 16th) Layouts and Sections with Razor (Today) In today’s post I’m going to go into more details about how Layout pages work with Razor.  In particular, I’m going to cover how you can have multiple, non-contiguous, replaceable “sections” within a layout file – and enable views based on layouts to optionally “fill in” these different sections at runtime.  The Razor syntax for doing this is clean and concise. I’ll also show how you can dynamically check at runtime whether a particular layout section has been defined, and how you can provide alternate content (or even an alternate layout) in the event that a section isn’t specified within a view template.  This provides a powerful and easy way to customize the UI of your site and make it clean and DRY from an implementation perspective. What are Layouts? You typically want to maintain a consistent look and feel across all of the pages within your web-site/application.  ASP.NET 2.0 introduced the concept of “master pages” which helps enable this when using .aspx based pages or templates.  Razor also supports this concept with a feature called “layouts” – which allow you to define a common site template, and then inherit its look and feel across all the views/pages on your site. I previously discussed the basics of how layout files work with Razor in my ASP.NET MVC 3: Layouts with Razor blog post.  Today’s post will go deeper and discuss how you can define multiple, non-contiguous, replaceable regions within a layout file that you can then optionally “fill in” at runtime. Site Layout Scenario Let’s look at how we can implement a common site layout scenario with ASP.NET MVC 3 and Razor.  Specifically, we’ll implement some site UI where we have a common header and footer on all of our pages.  We’ll also add a “sidebar” section to the right of our common site layout.  On some pages we’ll customize the SideBar to contain content specific to the page it is included on: And on other pages (that do not have custom sidebar content) we will fall back and provide some “default content” to the sidebar: We’ll use ASP.NET MVC 3 and Razor to enable this customization in a nice, clean way.  Below are some step-by-step tutorial instructions on how to build the above site with ASP.NET MVC 3 and Razor. Part 1: Create a New Project with a Layout for the “Body” section We’ll begin by using the “File->New Project” menu command within Visual Studio to create a new ASP.NET MVC 3 Project.  We’ll create the new project using the “Empty” template option: This will create a new project that has no default controllers in it: Creating a HomeController We will then right-click on the “Controllers” folder of our newly created project and choose the “Add->Controller” context menu command.  This will bring up the “Add Controller” dialog: We’ll name the new controller we create “HomeController”.  When we click the “Add” button Visual Studio will add a HomeController class to our project with a default “Index” action method that returns a view: We won’t need to write any Controller logic to implement this sample – so we’ll leave the default code as-is.  
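    For reference, the default controller generated at this step is only a few lines; a minimal sketch is shown below, where the namespace is a placeholder for whatever the project happens to be named:

        using System.Web.Mvc;

        namespace LayoutsSample.Controllers
        {
            public class HomeController : Controller
            {
                // GET: /Home/
                // Renders /Views/Home/Index.cshtml inside the shared layout.
                public ActionResult Index()
                {
                    return View();
                }
            }
        }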
Creating a View Template Our next step will be to implement the view template associated with the HomeController’s Index action method.  To implement the view template, we will right-click within the “HomeController.Index()” method and select the “Add View” command to create a view template for our home page: This will bring up the “Add View” dialog within Visual Studio.  We do not need to change any of the default settings within the above dialog (the name of the template was auto-populated to Index because we invoked the “Add View” context menu command within the Index method).  When we click the “Add” button within the dialog, a Razor-based “Index.cshtml” view template will be added to the \Views\Home\ folder within our project.  Let’s add some simple default static content to it: Notice above how we don’t have an <html> or <body> section defined within our view template.  This is because we are going to rely on a layout template to supply these elements and use it to define the common site layout and structure for our site (ensuring that it is consistent across all pages and URLs within the site).  Customizing our Layout File Let’s open and customize the default “_Layout.cshtml” file that was automatically added to the \Views\Shared folder when we created our new project: The default layout file (shown above) is pretty basic and simply outputs a title (if specified in either the Controller or the View template) and adds links to a stylesheet and jQuery.  The call to “RenderBody()” indicates where the main body content of our Index.cshtml file will be merged into the output sent back to the browser. Let’s modify the Layout template to add a common header, footer and sidebar to the site: We’ll then edit the “Site.css” file within the \Content folder of our project and add 4 CSS rules to it: And now when we run the project and browse to the home “/” URL of our project we’ll see a page like below: Notice how the content of the HomeController’s Index view template and the site’s Shared Layout template have been merged together into a single HTML response.  Below is what the HTML sent back from the server looks like: Part 2: Adding a “SideBar” Section Our site so far has a layout template that has only one “section” in it – what we call the main “body” section of the response.  Razor also supports the ability to add additional “named sections” to layout templates as well.  These sections can be defined anywhere in the layout file (including within the <head> section of the HTML), and allow you to output dynamic content to multiple, non-contiguous, regions of the final response. Defining the “SideBar” section in our Layout Let’s update our Layout template to define an additional “SideBar” section of content that will be rendered within the <div id=”sidebar”> region of our HTML.  We can do this by calling the RenderSection(string sectionName, bool required) helper method within our Layout.cshtml file like below:   The first parameter to the “RenderSection()” helper method specifies the name of the section we want to render at that location in the layout template.  The second parameter is optional, and allows us to define whether the section we are rendering is required or not.  If a section is “required”, then Razor will throw an error at runtime if that section is not implemented within a view template that is based on the layout file (which can make it easier to track down content errors).
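    A rough sketch of what the modified _Layout.cshtml might look like at this point is shown below; the header and footer text and the div ids are placeholders that follow the walkthrough rather than the author's exact markup:

        <!DOCTYPE html>
        <html>
        <head>
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
        </head>
        <body>
            <div id="header">My Site</div>

            <div id="body">
                @* The view template's main content is merged in here. *@
                @RenderBody()
            </div>

            <div id="sidebar">
                @* Optional named section: views may fill this in, but do not have to. *@
                @RenderSection("SideBar", required: false)
            </div>

            <div id="footer">Copyright 2010</div>
        </body>
        </html>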
    If a section is not required, then its presence within a view template is optional, and the above RenderSection() code will render nothing at runtime if it isn’t defined. Now that we’ve made the above change to our layout file, if we hit refresh in our browser we’ll see that we currently have no content within our SideBar <div> – that is because the Index.cshtml view template doesn’t implement our new “SideBar” section yet.

    Implementing the “SideBar” Section in our View Template: Let’s change our home-page so that it has a SideBar section that outputs some custom content. We can do that by opening up the Index.cshtml view template, and by adding a new “SideBar” section to it. We’ll do this using Razor’s @section SectionName { } syntax. We could have put our SideBar @section declaration anywhere within the view template. I think it looks cleaner when defined at the top or bottom of the file – but that is simply personal preference. You can include any content or code you want within @section declarations. Note that I have a C# code nugget that outputs the current time at the bottom of the SideBar section. I could have also written code that used ASP.NET MVC’s HTML/AJAX helper methods and/or accessed any strongly-typed model objects passed to the Index.cshtml view template. Now that we’ve made the above template changes, when we hit refresh in our browser again we’ll see that our SideBar content – content that is specific to the Home Page of our site – is included in the page response sent back from the server, merged into the proper location of the HTML response.

    Part 3: Conditionally Detecting if a Layout Section Has Been Implemented. Razor provides the ability for you to conditionally check (from within a layout file) whether a section has been defined within a view template, and enables you to output an alternative response in the event that the section has not been defined. This provides a convenient way to specify default UI for optional layout sections. Let’s modify our Layout file to take advantage of this capability. We conditionally check whether the “SideBar” section has been defined within the view template being rendered (using the IsSectionDefined() method), and if so we render the section. If the section has not been defined, then we instead render some default content for the SideBar.

    Note: You want to make sure you prefix calls to the RenderSection() helper method with a @ character – which will tell Razor to execute the HelperResult it returns and merge the section content into the appropriate place of the output. Notice how we write @RenderSection(“SideBar”) instead of just RenderSection(“SideBar”) – otherwise you’ll get an error.

    Above we are simply rendering an inline static string (<p>Default SideBar Content</p>) if the section is not defined. A real-world site would more likely refactor this default content to be stored within a separate partial template (which we’d render using the Html.RenderPartial() helper method within the else block) or alternatively use the Html.Action() helper method within the else block to encapsulate both the logic and rendering of the default sidebar. When we hit refresh on our home-page, we will still see the same custom SideBar content we had before.
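    The two screenshots referenced above aren’t reproduced here. A sketch of what the Index.cshtml @section declaration and the conditional layout logic might look like follows – the body text and sidebar content are illustrative; the “Default SideBar Content” string and the IsSectionDefined()/RenderSection() calls come from the description above:

        @* Index.cshtml - view template for the home page *@
        @{
            ViewBag.Title = "Home Page";
        }

        <h2>Home Page</h2>
        <p>Main body content for the home page goes here.</p>

        @section SideBar {
            <p>Custom sidebar content for the home page.</p>
            <p>Current time: @DateTime.Now</p>
        }

        @* _Layout.cshtml - sidebar region with a default fallback *@
        <div id="sidebar">
            @if (IsSectionDefined("SideBar"))
            {
                @RenderSection("SideBar")
            }
            else
            {
                <p>Default SideBar Content</p>
            }
        </div>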
    This is because we implemented the SideBar section within our Index.cshtml view template (and so our Layout rendered it).

    Let’s now implement a “/Home/About” URL for our site by adding a new “About” action method to our HomeController. The About() action method simply renders a view back to the client when invoked. We can implement the corresponding view template for this action by right-clicking within the “About()” method and using the “Add View” menu command (like before) to create a new About.cshtml view template. We’ll implement the About.cshtml view template without defining a “SideBar” section within it. When we browse the /Home/About URL we’ll see the content we supplied in the main body section of our response, and the default SideBar content will be rendered: the layout file determined at runtime that a custom SideBar section wasn’t present in the About.cshtml view template, and instead rendered the default sidebar content.

    One Last Tweak… Let’s suppose that at a later point we decide that instead of rendering default side-bar content, we just want to hide the side-bar entirely from pages that don’t have any custom sidebar content defined. We could implement this change simply by making a small modification to our layout so that the sidebar content (and its surrounding HTML chrome) is only rendered if the SideBar section is defined (see the sketch at the end of this post). Razor is flexible enough that we can make a change like this without having to modify any of our view templates (nor make any Controller logic changes). We can instead make just this one modification to our Layout file and the rest happens cleanly. This type of flexibility makes Razor incredibly powerful and productive.

    Summary: Razor’s layout capability enables you to define a common site template, and then inherit its look and feel across all the views/pages on your site. Razor enables you to define multiple, non-contiguous, “sections” within layout templates that can be “filled-in” by view templates. The @section {} syntax for doing this is clean and concise. Razor also supports the ability to dynamically check at runtime whether a particular section has been defined, and to provide alternate content (or even an alternate layout) in the event that it isn’t specified. This provides a powerful and easy way to customize the UI of your site - and make it clean and DRY from an implementation perspective.

    Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
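    Referring back to the “One Last Tweak” step above (whose code screenshot isn’t reproduced here), a sketch of the modified sidebar region – rendering the surrounding <div> only when the section is defined – might look like this:

        @* _Layout.cshtml - omit the sidebar chrome entirely when no section is defined *@
        @if (IsSectionDefined("SideBar"))
        {
            <div id="sidebar">
                @RenderSection("SideBar")
            </div>
        }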

    Read the article

  • CodePlex Daily Summary for Friday, April 02, 2010

    CodePlex Daily Summary for Friday, April 02, 2010New ProjectsAE.Remoting: An alternative means of remoting for .NET to allow for intuitive usage and easy implementation into existing code.animated-smoke-modeling: This is an implementation or a demo of our method to model animated smokes. ASP.NET Google Maps: Extensible and easy to use, this is ASP.NET Google Maps Control. Drag & Drop and is ready to go. You can configure map style, add a PushPin using t...CartPatches able to see: CartPatches able to see youCodemix Cms: Codemix CmsDo the right thing - The Simple TodoManager: A simple Todo Manager which lets you focus on your daily most important tasks/todos. So do the right thing.....at your home, in your office, in you...Fast Console: Fast Console is a simple xml programming language. This may be a really good starting language as there are printing, variables and as soon as poss...Graphing Calculator in Silverlight: This was initially an effort to port a WPF graphing calculator written by Bob Brown (Microsoft) into Silverlight but soon after it became necessary...InformationVSTS: This application allows you to have all informations on VSTS installed. It also lets you know the server of BUILD and project.La Ranisima: La Ranisima is an open source "Space Invaders" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard. This cross-platfo...La villa del seis: La villa del seis is a multiplatform point-and-click graphical adventure. Also, you can play it like a text adventure (interactive fiction) on a te...LParse: LParse is a monadic parser combinator library, similar to Haskell’s Parsec. It allows you create parsers on C# language. All parsers are first-clas...Manage Recents File/Project VS2005/2008: Clear Recents Files and Projects, and Clear Broken Links of Recents Files and Projects for VS2005 and VS2008. Developed in Visual Studio 2008 SP...Mavention: Mavention makes SharePoint work for you.MixMail: MixMailMixScrum: mixScrumMixTemplate: MixTemplate.NepomucenoBR Regex Learning Tool: This is a simple program designed to help people to study regular expressions.Pruebas: Pruebas is an open source game mix of text adventure and RPG written in Microsoft QBasic (under MS-DOS 6.22) that uses keyboard. Runs natively unde...Python Design by Contract: Simple to use invariants, pre- and postconditions which use some of the new metaprogramming features in Python 3.Rubik Cube's 3D Silverlight 3.0 Animated Solution: Rubik Cube's Silverlight 3.0 Animated Solution is a 3D presentation of Rubik Cube in range of up to 7x7x7 size with full functionality and an anima...Seminarka: Seminarka - ko treba znat šta je zna!SENAC 2010 - Projeto Integrador 2 (Material de Apoio): Material utilizado para apoiar os alunos da disciplina de Projeto integrador 2. O tema são sistemas web utilizando ASP.NET, com C# e banco de da...SENAC CG2010: Contém código apresentado em sala de aula para a disciplina de CG, 5ºBSI NoturnoSistema de facturación: Sistema de facturación desarrollado en C# para la clase de programación 3.SmartFront - WPF and Silverlight Toolkit: SmartFront is a framework piece which allow to quickly building Smart Client application in WPF and in Silverlight. This framework uses existing s...Solar 1: This is the ASP.NET MVC engine based on Oxite and used for 32planets.net.TemporalSQL: SQL Patterns - tables, queries, and functions - to design a temporal database. 
TFunkOrderSystem: The Funkalistic Blueprint and Items order management systemTribe.Cache: Tribe.Cache is a simple dictionary cache (persistent dictionary) written in C# which is easy to implement and use.tstProject: Testing ProjectUDC indexes parser: UDC (Universal Decimal Classification) indexes parserWebAssert: A test assertion library to assist in writing automated tests against websites. Allows for assertion of HTML validity, etc. Initially has support f...Words Via Subtitle: Words Via Subtitle makes it easier for English Learners to learn new words that appears in TV shows or movies. You'll no longer have to look up the...x5s - a cross site scripting (XSS) testing tool: x5s aims to be a specialized testing tool which assists penetration testers in finding cross-site scripting hot-spots. By auto-injecting token valu...XNA Shooter Engine: The XNA Shooter Engine is a game engine for XNA designed specifically with first-person-shooter-style games in mind. It's being developed for an as...我的开发集: for my study .net csharpNew ReleasesAppFabric Caching Admin Tool: AppFabric Caching Admin Tool 1.1: System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x) Note: Must run as Administrator !!!ASP.NET Google Maps: ASP.NET Google Maps 0.1b: Project Description Extensible and easy to use, this is ASP.NET Bing Maps Control. Drag & Drop and is ready to go. You can configure map style, add...AutoFixture: Version 1.0.9 (RC1): This is Release Candidate 1 of AutoFixture 1.1. This release contains no known bugs. Compared to AutoFixture 1.0, it fixes some bugs that were dis...Camlex.NET: Camlex.NET 2.0: Camlex.NET 2.0 release New features Search by field id Support for native System.Guid type for values Search by lookup id and lookup value D...CloudCache - Distributed Cache Tier with Azure: v1.0.0.1: New Release on April 1st 2010 No this is not April fools a new release has made it's way out. Below are the changes: Removed dependency on Azure S...DigitallyCreated Utilities: DigitallyCreated Utilities v1.0.1: This release is the v1.0.1 version of DigitallyCreated Utilities. This update is highly recommended for all users of v1.0.0 as it fixes a critical ...Fast Console: Fast Console Alpha: Fast Console is an easy to use and learn programming language. Code example is found in the file TestFile.xml When you've written your code just sa...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.6 beta Released: Hi, This release contains following enhancements. * Zooming feature has been enhanced with the new functionality of ZoomRectangle. Now, users...Graphing Calculator in Silverlight: 1.0.1: Graphing Calculator for Silverlight is written entirely in C# and is based on the Silverlight 3 release. I will soon release the full documentation...Home Access Plus+: v3.2.0.1: v3.2.0.1 Release Change Log: Fixed: Issue with & ampersand File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/Scripts/viewmode.jsIcarus Scene Engine: Icarus Professional 2 Alpha 2 v 1.10.329.913: Alpha release 2 of Icarus Professional. This release includes: IcarusX: The ActiveX-based browser control for rendering IPX projects online. Icaru...Line Counter: 1.5.2: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. Line Counter 1.5.2 Added General Code Counter ...ManagedCv: ManagedCv v0.0.0.1: Win32Mavention: Mavention Simple Menu: SharePoint 2010 ships with a menu control that allows you to render a site menu using semantic markup. 
Using the Mavention Simple Menu you can do t...MDownloader: MDownloader-0.15.10.57200: Fixed uploading.com links detection; Fixed downloading from uploading.com; Fixed downloading from load.to; Fixed detecting incompatible sources;MixMail: V1: MixMailMixTemplate: v1: releaseMvcPager: MvcPager 1.3 for ASP.NET MVC 1.0: MvcPager 1.3 for ASP.NET MVC 1.0 compiled assembly files and demo projectsMvcPager: MvcPager 1.3 for ASP.NET MVC 2.0: MvcPager 1.3 for ASP.NET MVC 2.0 compiled assembly and demo projectsMvcUnity - ASP.NET MVC Dependency Injection: 2.1 Source Code: Drop 2.1 Source CodeNepomucenoBR Regex Learning Tool: NepomucenoBR Regex Learning Tool v0.1 alpha: This is the first version of this application. If you find any bug, please contact me at http://www.nepomucenobr.com.brNepomucenoBR Regex Learning Tool: NepomucenoBR Regex Learning Tool v0.1 source-code: This is the first version of this application. If you find any bug, please contact me at http://www.nepomucenobr.com.brocculo: occulo 0.2 binaries: Release build binaries instead of debug, should now work for other users. Fixed bit rotation and output filename bugs.occulo: occulo 0.2 source: Second source release. See binary release for changes.Python Design by Contract: v0.1: This is the inital release. I think it is working fine.SharePoint Labs: SPLab5002A-FRA-Level200: SPLab5002A-FRA-Level200 This SharePoint Lab will teach you how to modify CAML schema to have IntelliSense on Feature's GUID. Lab Language : French ...SharePoint Labs: SPLab5003A-FRA-Level100: SPLab5003A-FRA-Level100 This SharePoint Lab will teach you how to manually create a Feature, how to brand a Feature and how to incorporate ressourc...SharePoint Labs: SPLab5004A-FRA-Level100: SPLab5004A-FRA-Level100 This SharePoint Lab will teach you how to create a Feature within Visual Studio, how to brand it, how to incorporate ressou...SharePoint Labs: SPLab5005A-FRA-Level100: SPLab5005A-FRA-Level100 This SharePoint Lab will teach you how to create a Feature within Visual Studio, how to brand it, how to incorporate ressou...SSIS ReportGeneratorTask: Version 1.53: Some bugfixes to version 1.52 beta Server Report properties can be displayed. Snapshots can be created. Screenshots of the planned version 1.53 ca...TemporalSQL: April 2010: Initial set of prototypes demonstrating temporal patterns, queries, and functions in SQL ServerTortoiseHg: TortoiseHg 1.0.1: TortoiseHg 1.0.1 is a bug fix release. We recommend all users upgrade to this release. http://bitbucket.org/tortoisehg/stable/wiki/ReleaseNotes#t...Tribe.Cache: Tribe.Cache Alpha: Functional Alpha Release - Do not use in productionTS3QueryLib.Net: TS3QueryLib.Net Version 0.21.15.0: Changelog Added class "ServerListItemBase" which is used in the new method "GetServerListShort" of QueryRunner class. (Change of Beta 21) Added ...UDC indexes parser: Runtime Binary Alpha 1: First alpha versionVisual Studio DSite: Text To Binary (Visual C++ 2008): A simple c program that can convert text to binary. Source code only.x5s - a cross site scripting (XSS) testing tool: x5s 1.0 beta: PLACEHOLDER (coming soon)XNA Shooter Engine: GDK Tools 0.1.0.0: This is a small, very early release of the GDK Tools. The only included tool is Input Map Editor.XPath Visualizer: XPathVisualizer v1.2: Last updated 1 April 2010. This is not a joke! 
includes new features: Ctrl-S shortcut key for Saving the XML file Ctrl-F shortcut for re-form...すとれおじさん(仮): すとれおじさん β 0.01: とりあえず公開のバージョンです。 中途半端な機能がいっぱいあります。Most Popular ProjectsRawrWBFS ManagerASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrGraffiti CMSBase Class LibrariesjQuery Library for SharePoint Web ServicesBlogEngine.NETMicrosoft Biology FoundationN2 CMSLINQ to TwitterManaged Extensibility FrameworkFarseer Physics Engine

    Read the article

  • CodePlex Daily Summary for Monday, March 05, 2012

    CodePlex Daily Summary for Monday, March 05, 2012Popular ReleasesSimple Injector: Simple Injector v1.4.1: This release adds two small improvements to the SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for the SimpleInjector.Extensions.dll The RegisterManyForOpenGeneric extension methods now accept non-generic decorator, as long as they implement the given open generic service type. GetTypesToRegister methods added to the OpenGenericBatchRegistrationExtensions class which allows to customize the behavior. Note that the...SQL Scriptz Runner: Application: Scriptz Runner source code and applicationCommonLibrary: Code: CodePowerGUI Visual Studio Extension: PowerGUI VSX 1.5.2: Added support for PowerGUI 3.2.Path Copy Copy: 10.0: New version with the following biggest new features: Regular expression support in custom commands (see this work item) Import/export feature for custom commands (see this work item) This version also addresses the following work items: 11353 11348 This version is a recommended upgrade for all users. Note: new custom commands that user regular expressions won't be compatible with earlier versions (if installed by registry manipulation or via network installation), but everything else is ...VidCoder: 1.3.1: Updated HandBrake core to 0.9.6 release (svn 4472). Removed erroneous "None" container choice. Change some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan. This should make previewing DVD titles on the Queue Multiple Titles window possible when you've picked the root DVD directory.ASP.NET MVC Framework - Abstracting Data Annotations, HTML5, Knockout JS techs: Version 1.0: Please download the source code. I am not associating any dll for release.ExtAspNet: ExtAspNet v3.1.0: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-03-04 v3.1.0 -??Hidden???????(〓?〓)。 -?PageManager??...AcDown????? - Anime&Comic Downloader: AcDown????? v3.9.1: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? 
??????????????,??????????: ??"AcDo...Windows Phone Commands for VS2010: Version 1.0: Initial Release Version 1.0 Connect from device or emulator (Monitors the connection) Show Device information (Plataform, build , version, avaliable memory, total memory, architeture Manager installed applications (Launch, uninstall and explorer isolate storage files) Manager core applications (Launch blocked applications from emulator (Office, Calculator, alarm, calendar , etc) Manager blocked settings from emulator (Airplane Mode, Celullar Network, Wifi, etc) Deploy and update ap...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.01.00: Changes on Version 06.01.00 Fixed issue on GraySmallTitle container, that breaks the layout Fixed issue on Blue Metro7 Skin where the Search, Login, Register, Date is missing Fixed issue with the Version numbers on the target file Fixed issue where the jQuery and jQuery-UI files not deleted on upgrade from Version 01.00.00 Added a internal page where the Image Slider would be replaces with a BannerPaneMedia Companion: MC 3.433b Release: General More GUI tweaks (mostly imperceptible!) Updates for mc_com.exe TV The 'Watched' button has been re-instigated Added TV Menu sub-option to search ALL for new Episodes (includes locked shows) Movies Added 'Source' field (eg DVD, Bluray, HDTV), customisable in Advanced Preferences (try it out, let us know how it works!) Added HTML <<format>> tag with optional parameters for video container, source, and resolution (updated HTML tags to be added to Documentation shortly) Known Issu...Picturethrill: Version 2.3.2.0: Release includes Self-Update feature for Picturethrill. What that means for users is that they are always guaranteed to have a fresh copy of Picturethrill on their computers with all latest fixes. When Picturethrill adds a new website to get pictures from, you will get it too!Simple MVVM Toolkit for Silverlight, WPF and Windows Phone: Simple MVVM Toolkit v3.0.0.0: Added support for Silverlight 5.0 and Windows Phone 7.1. Upgraded project templates and samples. Upgraded installer. There are some new prerequisites required for this version, namely Silverlight 5 Tools, Expression Blend Preview for Silverlight 5 (until the SDK is released), Windows Phone 7.1 SDK. Because it is in the experimental band, I have also removed the dependency on the Silverlight Testing Framework. You can use it if you wish, but the Ria Services project template no longer uses ...CODE Framework: 4.0.20301: The latest version adds a number of new features to the WPF system (such as stylable and testable messagebox support) as well as various new features throughout the system (especially in the Utilities namespace).MyRouter (Virtual WiFi Router): MyRouter 1.0.2 (Beta): A friendlier User Interface. A logger file to catch exceptions so you may send it to use to improve and fix any bugs that may occur. A feedback form because we always love hearing what you guy's think of MyRouter. Check for update menu item for you to stay up to date will the latest changes. Facebook fan page so you may spread the word and share MyRouter with friends and family And Many other exciting features were sure your going to love!WPF Sound Visualization Library: WPF SVL 0.3 (Source, Binaries, Examples, Help): Version 0.3 of WPFSVL. 
This includes three new controls: an equalizer, a digital clock, and a time editor.Orchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-NotesNetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls. Thanks to t...SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with following changes since last version: Added stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving list of online computers from last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...New ProjectsAbsolute Risk Game: Risk Game is a classic "World Domination Risk" game where you try to conquer the world.Architecture Document Catalog: The Architecture Document Catalog is a catalog for various documentation used by software solution architects. The initial architecture framework supported is from Rozanski and Woods, also called Viewpoints and Perspectives. It's in C#, Silverlight, WCF RIA Services.Background Worker Job Execution & Job Scheduler: Background worker is a multi-threaded job execution and scheduling engine. Very similar to Quartz, but with a slightly different focus.BWOS: preparing a new operating system. and name BWOS. CommonLibrary: Common modules and componentesCustomizeTool: CustomizeToolData Foundry: Data Foundry is a Swiss army knife for database administrator. With capabilities to cross reference check data between tables to find orphaned data and table relations.Data Grid Extensions: Modular extensions for the DataGrid control. Attach filtering capabilities to your existing DataGrid.dk.Helper: dk.Helper makes it easier for tribal wars game players (divoke-kmene.sk) to manage common activities in game. It's developed in C#.ExEn: ExEn is a high-performance implementation of a subset of the XNA API that runs on Silverlight, iOS and Android.Extended Methods for .NET: Developed in VB.NET, this DLL includes some functions I came across over the internet. Originally they were functions. I made some changes to suit my work and recreated them as Extended methods. Currently includes two extended methods for the Image class and one extended method for the String class. I will update it with new methods as I go along.GestionarComenzi: Gestioneaza Comenzi - Firma transporturiGIS Library Management: GIS Library Management System.iFinity Cache Master for DotNetNuke: The iFinity DotNetNuke Cache Master is an Administration module for DotNetNuke. This module allows administrators to inspect the current contents of the ASP.NET Cache in their running DotNetNuke installation. 
A very useful tool for developers working with the DotNetNuke cache.intelliEssay Document Format Checker: This is a product that tries to check a document's format to see if it conforms to certain given standard.Just T[he]IP: "Just The IP" leverages the Bing API v2 and the ip: Bing Advanced Operator to list the other web sites hosted at this IP Address (that are indexed by Bing).KTool: Ktool is a tool for learning japanese kanji. This program is still in developed. Current version can only display and find a kanji word (Press ctrl-F on main screen for search). Developed in WPF, .NET 4.0Lab Checker: Lab Checker makes it easy for teacher to check students' programs. It allows a student to test his program on a set of test cases which eliminates the need to run the program manually.MeoBox Vera Control Plugin: This is my project for creating a plugin to control my MeoBox ( Scientific Atlanta ) using Vera ( www.micasaverde.com ) Should work with other TV2Client boxesNUnitTestHelper: NUnitTestHelper is a helper class for developing unit tests for C# applications with NUnit framework. This helper library provides set of classes and method which can be used for accomplishing faster unit test development for any kind of project. NUnitTestBase – Supports basic functionality for writing a unit test with NUnit framework. This base class provides built in support for mocking framework. This version provides support for Rhino mocking framework.OnlineExam: Common online examination system based on Asp.net MVC3 can used for knowledge test, I/Q test, college test or most other test project.Orchard Code Generation Extensions: Code Generation Extensions module for Orchard.OrchardTranstion_CN: Orchar?? OsAvatar: to next step showOwnTools: About my toolsPhone Finder App: PhoneFinder will be a Windows Phone platform app alternative for finding a lost or stolen phone. Our plan is to have additional functionality the Windows software does not include. It will be developed using ASP.NET MVC, SQL Server, C#, etc.PoshChat: PoshChat is a client/server chat program written in PowerShell. Supports multiple client connections to a single server to chat.Process Attachment And Secure Text: ProcessAttachmentAndSecureText is an Exchange Transport Agent DLL and associated ASPX page. It is used to 1) strip out attachments and send them to an upload portal, and 2) to strip out text from emails and send it to an upload portal. It is currently coded for Exchange 2010Resource Viewer - Visual Studio Extension: Simply put the “resource viewer extension” enables you to visually view your ResourceDictionary. To open it go to: View – Other Windows – Resource Viewer. When working with WPF/Silverlight you put your reusable resources in a common ResourceDictionary, those resources might be of type Style, SolidColorBrush, DrawingBrush, BitmapImage and more. The problems starts when you have that ResourceDictionary you have no way to see how your resources look like, making the work process (of both t...Scumm XNA: Scumm XNA is an engine that runs old school LucasArts graphical adventure games. It is written completely in C# and will run on PC, Xbox 360 and Windows phone. The code is inspired by ScummVM but this project is not a port, it is a complete rewrite in order to optimize the engine for the CLR. Of course, you will need to own the orinigal games in order to use it. 
Currently, I will focus my work on the great "Day of Tentacle" game specifically the CD version.SDX DataGrid: SDXDataGrid is a comprehensive data grid component for Microsoft .NET 3.5 web application developers. It is designed to ease the exhausting process of implementing the necessary code for sorting, navigation, grouping, searching and data editing in a data representation object.SQL Scriptz Runner: Features are : Drag And Drop script files Run a directory of script files Sql Script out put messages during execution Script passed or failed that are colored green and red (yellow for running) Stop on error option Open script on error option Run report with time taken for each uComponents Demo: A demo for the code of running uComponents in Umbraco siteUnixTable: UnixTable makes it easier to realize application to access database, with no code but only with visual instrument at runtime. You'll no longer have to write query or code to read table from database. It's developed in Visual Basic for .NET 4. VOA Player.NET: VOA Player is a lightweight windows client for listening VOA Special English. Because of GFW blocking it fetch RSS data via rss2proxy.appspot.com indirectly. User can view article page and listen MP3 stream. The project is a C# implementation of voaplayer.sinaapp.com.Windows Metafile Library: The library supports reading and writing WMF files. Source code is written in pure .NET from scratch following the Windows Metafile Format Specification.Windows Phone 7.1 + MicroFramework (a phone device as remote control): Windows Phone 7.1 + .Net MicroFramework (How use a windows phone device as remote control of a Fez Panda II/MicroFramework board). A windows phone device can be connected to a wifi network (for example a wifi router). If you have also a MicroFramework board connected to the wifi network; you can send some http rest commands from the windows phone device to the MicroFramework board. The microframework board, it must support the tcp/ip protocol through a connect shield. In this exaple it will...XBMC Cache Manager: A Windows Service to manage a shared XBMC MySQL database and shared cache folder.

    Read the article

  • Coherence - How to develop a custom push replication publisher

    - by cosmin.tudor(at)oracle.com
    CoherencePushReplicationDB.zipIn the example bellow I'm describing a way of developing a custom push replication publisher that publishes data to a database via JDBC. This example can be easily changed to publish data to other receivers (JMS,...) by performing changes to step 2 and small changes to step 3, steps that are presented bellow. I've used Eclipse as the development tool. To develop a custom push replication publishers we will need to go through 6 steps: Step 1: Create a custom publisher scheme class Step 2: Create a custom publisher class that should define what the publisher is doing. Step 3: Create a class data is performing the actions (publish to JMS, DB, etc ) for the custom publisher. Step 4: Register the new publisher against a ContentHandler. Step 5: Add the new custom publisher in the cache configuration file. Step 6: Add the custom publisher scheme class to the POF configuration file. All these steps are detailed bellow. The coherence project is attached and conclusions are presented at the end. Step 1: In the Coherence Eclipse project create a class called CustomPublisherScheme that should implement com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme. In this class define the elements of the custom-publisher-scheme element. For instance for a CustomPublisherScheme that looks like that: <sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine-name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher> the code is: package com.oracle.coherence; import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; import com.oracle.coherence.patterns.pushreplication.Publisher; import com.oracle.coherence.configuration.Configurable; import com.oracle.coherence.configuration.Mandatory; import com.oracle.coherence.configuration.Property; import com.oracle.coherence.configuration.parameters.ParameterScope; import com.oracle.coherence.environment.Environment; import com.tangosol.io.pof.PofReader; import com.tangosol.io.pof.PofWriter; import com.tangosol.util.ExternalizableHelper; @Configurable public class CustomPublisherScheme extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme { /** * */ private static final long serialVersionUID = 1L; private String jdbcString; private String username; private String password; public String getJdbcString() { return this.jdbcString; } @Property("jdbc-string") @Mandatory public void setJdbcString(String jdbcString) { this.jdbcString = jdbcString; } public String getUsername() { return username; } @Property("username") @Mandatory public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } @Property("password") @Mandatory public void setPassword(String password) { this.password = password; } public Publisher realize(Environment environment, ClassLoader classLoader, ParameterScope parameterScope) { return new CustomPublisher(getJdbcString(), getUsername(), getPassword()); } public void readExternal(DataInput in) throws IOException { super.readExternal(in); this.jdbcString = ExternalizableHelper.readSafeUTF(in); this.username = ExternalizableHelper.readSafeUTF(in); this.password = ExternalizableHelper.readSafeUTF(in); } public void writeExternal(DataOutput out) throws 
IOException { super.writeExternal(out); ExternalizableHelper.writeSafeUTF(out, this.jdbcString); ExternalizableHelper.writeSafeUTF(out, this.username); ExternalizableHelper.writeSafeUTF(out, this.password); } public void readExternal(PofReader reader) throws IOException { super.readExternal(reader); this.jdbcString = reader.readString(100); this.username = reader.readString(101); this.password = reader.readString(102); } public void writeExternal(PofWriter writer) throws IOException { super.writeExternal(writer); writer.writeString(100, this.jdbcString); writer.writeString(101, this.username); writer.writeString(102, this.password); } } Step 2: Define what the CustomPublisher should basically do by creating a new java class called CustomPublisher that implements com.oracle.coherence.patterns.pushreplication.Publisher package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import com.oracle.coherence.patterns.pushreplication.Publisher; import com.oracle.coherence.patterns.pushreplication.exceptions.PublisherNotReadyException; import java.io.BufferedWriter; import java.util.Iterator; public class CustomPublisher implements Publisher { private String jdbcString; private String username; private String password; private transient BufferedWriter bufferedWriter; public CustomPublisher() { } public CustomPublisher(String jdbcString, String username, String password) { this.jdbcString = jdbcString; this.username = username; this.password = password; this.bufferedWriter = null; } public String getJdbcString() { return this.jdbcString; } public String getUsername() { return username; } public String getPassword() { return password; } public void publishBatch(String cacheName, String publisherName, Iterator<EntryOperation> entryOperations) { DatabasePersistence databasePersistence = new DatabasePersistence( jdbcString, username, password); while (entryOperations.hasNext()) { EntryOperation entryOperation = (EntryOperation) entryOperations .next(); databasePersistence.databasePersist(entryOperation); } } public void start(String cacheName, String publisherName) throws PublisherNotReadyException { System.err .printf("Started: Custom JDBC Publisher for Cache %s with Publisher %s\n", new Object[] { cacheName, publisherName }); } public void stop(String cacheName, String publisherName) { System.err .printf("Stopped: Custom JDBC Publisher for Cache %s with Publisher %s\n", new Object[] { cacheName, publisherName }); } } In the publishBatch method from above we inform the publisher that he is supposed to persist data to a database: DatabasePersistence databasePersistence = new DatabasePersistence( jdbcString, username, password); while (entryOperations.hasNext()) { EntryOperation entryOperation = (EntryOperation) entryOperations .next(); databasePersistence.databasePersist(entryOperation); } Step 3: The class that deals with the persistence is a very basic one that uses JDBC to perform inserts/updates against a database. 
package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import java.sql.*; import java.text.SimpleDateFormat; import com.oracle.coherence.Order; public class DatabasePersistence { public static String INSERT_OPERATION = "INSERT"; public static String UPDATE_OPERATION = "UPDATE"; public Connection dbConnection; public DatabasePersistence(String jdbcString, String username, String password) { this.dbConnection = createConnection(jdbcString, username, password); } public Connection createConnection(String jdbcString, String username, String password) { Connection connection = null; System.err.println("Connecting to: " + jdbcString + " Username: " + username + " Password: " + password); try { // Load the JDBC driver String driverName = "oracle.jdbc.driver.OracleDriver"; Class.forName(driverName); // Create a connection to the database connection = DriverManager.getConnection(jdbcString, username, password); System.err.println("Connected to:" + jdbcString + " Username: " + username + " Password: " + password); } catch (ClassNotFoundException e) { e.printStackTrace(); } // driver catch (SQLException e) { e.printStackTrace(); } return connection; } public void databasePersist(EntryOperation entryOperation) { if (entryOperation.getOperation().toString() .equalsIgnoreCase(INSERT_OPERATION)) { insert(((Order) entryOperation.getPublishableEntry().getValue())); } else if (entryOperation.getOperation().toString() .equalsIgnoreCase(UPDATE_OPERATION)) { update(((Order) entryOperation.getPublishableEntry().getValue())); } } public void update(Order order) { String update = "UPDATE Orders set QUANTITY= '" + order.getQuantity() + "', AMOUNT='" + order.getAmount() + "', ORD_DATE= '" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order .getOrdDate()) + "' WHERE SYMBOL='" + order.getSymbol() + "'"; System.err.println("UPDATE = " + update); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(update); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public void insert(Order order) { String insert = "insert into Orders values('" + order.getSymbol() + "'," + order.getQuantity() + "," + order.getAmount() + ",'" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order .getOrdDate()) + "')"; System.err.println("INSERT = " + insert); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(insert); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public Connection getDbConnection() { return dbConnection; } public void setDbConnection(Connection dbConnection) { this.dbConnection = dbConnection; } } Step 4: Now we need to register our publisher against a ContentHandler. In order to achieve that we need to create in our eclipse project a new class called CustomPushReplicationNamespaceContentHandler that should extend the com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler. In the constructor of the new class we define a new handler for our custom publisher. 
package com.oracle.coherence; import com.oracle.coherence.configuration.Configurator; import com.oracle.coherence.environment.extensible.ConfigurationContext; import com.oracle.coherence.environment.extensible.ConfigurationException; import com.oracle.coherence.environment.extensible.ElementContentHandler; import com.oracle.coherence.patterns.pushreplication.PublisherScheme; import com.oracle.coherence.environment.extensible.QualifiedName; import com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler; import com.tangosol.run.xml.XmlElement; public class CustomPushReplicationNamespaceContentHandler extends PushReplicationNamespaceContentHandler { public CustomPushReplicationNamespaceContentHandler() { super(); registerContentHandler("custom-publisher-scheme", new ElementContentHandler() { public Object onElement(ConfigurationContext context, QualifiedName qualifiedName, XmlElement xmlElement) throws ConfigurationException { PublisherScheme publisherScheme = new CustomPublisherScheme(); Configurator.configure(publisherScheme, context, qualifiedName, xmlElement); return publisherScheme; } }); } } Step 5: Now we should define our CustomPublisher in the cache configuration file according to the following documentation. <cache-config xmlns:sync="class:com.oracle.coherence.CustomPushReplicationNamespaceContentHandler" xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler"> <caching-schemes> <sync:provider pof-enabled="false"> <sync:coherence-provider /> </sync:provider> <caching-scheme-mapping> <cache-mapping> <cache-name>publishing-cache</cache-name> <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name> <autostart>true</autostart> <sync:publisher> <sync:publisher-name>Active2 Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:remote-cluster-publisher-scheme> <sync:remote-invocation-service-name>remote-site1</sync:remote-invocation-service-name> <sync:remote-publisher-scheme> <sync:local-cache-publisher-scheme> <sync:target-cache-name>publishing-cache</sync:target-cache-name> </sync:local-cache-publisher-scheme> </sync:remote-publisher-scheme> <sync:autostart>true</sync:autostart> </sync:remote-cluster-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-Output-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:stderr-publisher-scheme> <sync:autostart>true</sync:autostart> <sync:publish-original-value>true</sync:publish-original-value> </sync:stderr-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine_name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher> </cache-mapping> </caching-scheme-mapping> <!-- The following scheme is required for each remote-site when using a RemoteInvocationPublisher --> <remote-invocation-scheme> <service-name>remote-site1</service-name> <initiator-config> <tcp-initiator> <remote-addresses> <socket-address> <address>localhost</address> <port>20001</port> </socket-address> </remote-addresses> <connect-timeout>2s</connect-timeout> </tcp-initiator> <outgoing-message-handler> <request-timeout>5s</request-timeout> </outgoing-message-handler> </initiator-config> 
</remote-invocation-scheme> <!-- END: com.oracle.coherence.patterns.pushreplication --> <proxy-scheme> <service-name>ExtendTcpProxyService</service-name> <acceptor-config> <tcp-acceptor> <local-address> <address>localhost</address> <port>20002</port> </local-address> </tcp-acceptor> </acceptor-config> <autostart>true</autostart> </proxy-scheme> </caching-schemes> </cache-config> As you can see in the red-marked text from above I've:       - set new Namespace Content Handler       - define the new custom publisher that should work together with other publishers like: stderr and remote publishers in our case. Step 6: Add the com.oracle.coherence.CustomPublisherScheme to your custom-pof-config file: <pof-config> <user-type-list> <!-- Built in types --> <include>coherence-pof-config.xml</include> <include>coherence-common-pof-config.xml</include> <include>coherence-messagingpattern-pof-config.xml</include> <include>coherence-pushreplicationpattern-pof-config.xml</include> <!-- Application types --> <user-type> <type-id>1901</type-id> <class-name>com.oracle.coherence.Order</class-name> <serializer> <class-name>com.oracle.coherence.OrderSerializer</class-name> </serializer> </user-type> <user-type> <type-id>1902</type-id> <class-name>com.oracle.coherence.CustomPublisherScheme</class-name> </user-type> </user-type-list> </pof-config> CONCLUSIONSThis approach allows for publishers to publish data to almost any other receiver (database, JMS, MQ, ...). The only thing that needs to be changed is the DatabasePersistence.java class that should be adapted to the chosen receiver. Only minor changes are needed for the rest of the code (to publishBatch method from CustomPublisher class).
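    One piece the post doesn’t show is the Order value object referenced by DatabasePersistence and registered (together with an OrderSerializer) as POF type-id 1901. Judging from the getters used in the insert/update statements, a minimal sketch might look like the following – the field types and constructor are assumptions, and the real class in the attached project may differ:

        package com.oracle.coherence;

        import java.io.Serializable;
        import java.util.Date;

        // Sketch of the Order value object used by DatabasePersistence.
        // Serialization is handled by the separate OrderSerializer declared
        // in the POF configuration, so only plain fields/accessors are shown.
        public class Order implements Serializable {

            private String symbol;
            private int quantity;
            private double amount;
            private Date ordDate;

            public Order() {
            }

            public Order(String symbol, int quantity, double amount, Date ordDate) {
                this.symbol = symbol;
                this.quantity = quantity;
                this.amount = amount;
                this.ordDate = ordDate;
            }

            public String getSymbol()  { return symbol; }
            public int getQuantity()   { return quantity; }
            public double getAmount()  { return amount; }
            public Date getOrdDate()   { return ordDate; }
        }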

    Read the article
