Search Results

Search found 2678 results on 108 pages for 'lcd fire'.


  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls syslogging their data to a pair of Linux VMs, which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN. The SAN can handle this workload right now, but our firewall traffic is going to grow by at least 5000% (really) in the coming months, and that will be a pain for the SAN - I want to fix the root cause before it becomes a problem. So I need some help figuring out a way of getting these writes cached or held off from the 'physical' disks so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding the writes themselves, but there's no need for them to be so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other file systems, but I thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk - our firewall team insists they be kept in their original format for diagnostic purposes.
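    A few knobs usually discussed for this situation (not from the question itself): ext3's commit= mount interval and data=writeback trade some durability for fewer journal flushes, and rsyslog/syslog-ng can buffer output before writing. Purely to illustrate the "larger but less frequent writes" idea the asker describes, here is a minimal, hypothetical Python sketch of a batching writer - the thresholds and path are invented, and it is not a drop-in replacement for a syslog daemon:

        # Hypothetical sketch: coalesce many tiny log writes into fewer large ones.
        # Not a fix for syslogd itself; it only illustrates the batching idea
        # (flush when the buffer reaches ~1 MB or every 30 seconds, whichever comes first).
        import time

        class BatchingWriter:
            def __init__(self, path, max_bytes=1024 * 1024, max_age=30.0):
                self.path = path
                self.max_bytes = max_bytes
                self.max_age = max_age
                self.buffer = []
                self.buffered_bytes = 0
                self.last_flush = time.monotonic()

            def write_line(self, line):
                self.buffer.append(line)
                self.buffered_bytes += len(line)
                age = time.monotonic() - self.last_flush
                if self.buffered_bytes >= self.max_bytes or age >= self.max_age:
                    self.flush()

            def flush(self):
                if not self.buffer:
                    return
                # One large append instead of thousands of ~4k writes.
                with open(self.path, "a", encoding="utf-8") as f:
                    f.write("".join(self.buffer))
                self.buffer = []
                self.buffered_bytes = 0
                self.last_flush = time.monotonic()

        # Usage sketch:
        #   writer = BatchingWriter("/var/log/firewalls.log")
        #   for msg in incoming_syslog_lines:
        #       writer.write_line(msg + "\n")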

    Read the article

  • Multi Monitor Setup Problems

    - by Shamballa
    I have Ubuntu 10.04 LTS - the Lucid Lynx. Until recently I was using an nVidia graphics card (NVIDIA GeForce 9800 GT) with two monitors attached, and that all worked fine and dandy. A couple of days ago I bought two new identical LCD monitors for a multi-monitor setup and two ATI graphics cards (ATI Sapphire Radeon HD5450). NOTE: all monitors work fine in Windows XP, 2k, Vista and 7. After I had booted into Ubuntu only one display came on, which I kind of expected anyway. I then removed the driver for the nVidia card and downloaded the ATI version, which gave me the ATI Catalyst Control Center - in that, only two of the displays were showing; the third was disabled and showing an unknown driver. I enabled the third monitor that stated "Unknown Driver" and had to reboot; upon reboot none of the displays worked. I restarted and booted into recovery mode, and from then on that is all I can get into, using a failsafe driver. It seems, according to the log, that a server is already active for display 0 and I have to remove /tmp/.X0-lock and start again. This is what the log file says:
    Fatal server error: Server is already active for display 0. If this server is no longer running, remove /tmp/.X0-lock and start again.
    (WW) xf86CloseConsole: KDSETMODE failed: Bad file descriptor
    (WW) xf86CloseConsole: VT_GETMODE failed: Bad file descriptor
    (WW) xf86CloseConsole: VT_GETSTATE failed: Bad file descriptor
    ddxSigGiveUp: Closing log
    I have tried looking at my xorg.conf file, but unfortunately I have not really got a clue as to how it "should" be. I have tried regenerating it using this command from a terminal: sudo dpkg-reconfigure -phigh xserver-xorg, but that had no effect, so I am currently stuck in failsafe driver mode with two monitors active but mirroring each other. I hope that this is not too long - looking back I have been going on a bit! - but I am just trying to explain as much as I can. I have asked this on LinuxQuestions but nobody seems to know either, or at least I have not had any responses. Could some kind soul please help explain what I can do from here? I would be eternally grateful. Chris * Update * Removing xorg.conf does nothing other than allowing me to use only two monitors. Using the command sudo aticonfig --initial generates the xorg.conf file below, but that does not work either - I just get two monitors...
Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 EndSection Section "Files" EndSection Section "Module" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection I have tried using this command from a thread on the Ubuntu Forums with a question similar to mine: sudo aticonfig --initial=dual-head --adapter=all Generated xorg.conf file Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 Screen "aticonfig-Screen[0]-1" RightOf "aticonfig-Screen[0]-0" Screen "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-1" Screen "aticonfig-Screen[1]-1" RightOf "aticonfig-Screen[1]-0" EndSection Section "Files" EndSection Section "Module" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-1" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[1]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[1]-1" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "aticonfig-Device[0]-1" Driver "fglrx" BusID "PCI:1:0:0" Screen 1 EndSection Section "Device" Identifier "aticonfig-Device[1]-0" Driver "fglrx" BusID "PCI:2:0:0" EndSection Section "Device" Identifier "aticonfig-Device[1]-1" Driver "fglrx" BusID "PCI:2:0:0" Screen 1 EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[0]-1" Device "aticonfig-Device[0]-1" Monitor "aticonfig-Monitor[0]-1" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[1]-0" Device "aticonfig-Device[1]-0" Monitor "aticonfig-Monitor[1]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[1]-1" Device "aticonfig-Device[1]-1" Monitor "aticonfig-Monitor[1]-1" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection This upon reboot renders ALL monitors blank and I have to go into recovery mode and use a failsafe driver. This is so much harder than I thought it would be, I don't think Ubuntu likes ATI for multi (3) monitors or maybe the other way around. Can anyone help still?

    Read the article

  • Updating files with a Perforce trigger before submit

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me. Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process. My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script so that I can use sed (or similar) to replace the placeholder value with the intended value? The online documentation for Perforce that I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage work.
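    A hedged sketch of the server-side piece, in Python (script name, depot path and placeholder are invented): a change-content trigger can read the in-flight file content with the @=changelist revision specifier via p4 print, which is usually enough to validate a placeholder, but it cannot rewrite the archived content - for embedding the change number itself, Perforce's +k keyword expansion ($Change$) is the usual route.

        #!/usr/bin/env python
        # Hypothetical change-content trigger sketch (path and placeholder invented).
        # A trigger table entry (installed with "p4 triggers") might look like:
        #   check-placeholder change-content //depot/docs/... "python check_placeholder.py %changelist%"
        # The trigger can READ in-flight content via the @=change revision specifier,
        # but it cannot rewrite the archive; $Change$ keyword expansion (+k filetype)
        # is the built-in way to embed the submitted change number.
        import subprocess
        import sys

        PLACEHOLDER = "#####PERFORCE##CHANGELIST##NUMBER#####"  # illustrative only

        def p4_tagged(*args):
            return subprocess.run(("p4", "-ztag") + args, capture_output=True,
                                  text=True, check=True).stdout

        def main(change):
            # List the documentation files that are part of this submit.
            listing = p4_tagged("files", "//depot/docs/...@=%s" % change)
            depot_files = [line.split(" ", 2)[2] for line in listing.splitlines()
                           if line.startswith("... depotFile ")]
            for depot_file in depot_files:
                content = subprocess.run(
                    ["p4", "print", "-q", "%s@=%s" % (depot_file, change)],
                    capture_output=True, text=True, check=True).stdout
                if PLACEHOLDER not in content:
                    print("%s is missing the changelist placeholder" % depot_file)
                    return 1  # non-zero exit rejects the submit
            return 0

        if __name__ == "__main__":
            sys.exit(main(sys.argv[1]))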

    Read the article

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSDs in a RAID 1 array. I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to fire off an email if such an event occurs? Here are some additional specs: Supermicro X9SCM-IIF motherboard utilising the onboard hardware RAID controller; OS = Windows 2012 Standard. Also, is it possible to simulate a disk failure by pulling a drive out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down so I can swap it out with minimum delay. UPDATE 26th June 2013: None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771
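    The linked workarounds boil down to relaying through something that can authenticate. As an illustration only (host, port, credentials and addresses are placeholders), a small Python step like the following could be called from a scheduled task or monitoring hook to push the alert through an authenticated SMTP server:

        # Hypothetical sketch: send a RAID-failure alert through an authenticated
        # SMTP server. Host, credentials and addresses are placeholders, not real values.
        import smtplib
        from email.message import EmailMessage

        def send_alert(subject, body):
            msg = EmailMessage()
            msg["From"] = "raid-monitor@example.com"
            msg["To"] = "admin@example.com"
            msg["Subject"] = subject
            msg.set_content(body)

            with smtplib.SMTP("smtp.example.com", 587) as smtp:
                smtp.starttls()   # upgrade to TLS before authenticating
                smtp.login("raid-monitor@example.com", "app-password")
                smtp.send_message(msg)

        if __name__ == "__main__":
            send_alert("RAID degraded on SERVER01",
                       "Intel RST reported a degraded volume; check the array.")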

    Read the article

  • Constant crashes in Windows 7 64-bit when playing games

    - by yx.
    I've tried everything I can possibly think of to fix this problem and I'm totally out of ideas, so any help would be appreciated. The problem: whenever I fire up a game, it works for a short while with no problems and then it crashes. Either it's a hard crash, forcing me to reboot, or Windows reports that the display driver has stopped working and recovered. Here is a list of things I've already tried:
    Drivers - tried the latest drivers (Catalyst 9.12) as well as the stock drivers that came with the video card. Also have the latest BIOS/chipset.
    Memtest - ran Memtest86+ overnight with no problems; the Windows diagnostic tool also does not find any problems.
    Overheating - video card/CPU temperatures are well below peak (42 and 31 Celsius respectively).
    PSU voltage - CPUID shows that the voltage levels are all above what they should be. The PSU itself is only roughly 16 months old and is a good model.
    HDD - no errors when checked.
    GPU - brand new (replaced the previous card since I thought it was the problem; apparently not).
    Overclocking - everything is at stock levels; memory voltage is set to the manufacturer's standard.
    Specs: Motherboard: ASUS P5Q Pro; CPU: Core 2 Duo E8400 3.0 GHz; OS: Windows 7 Home Premium 64-bit; Memory: Mushkin Enhanced 4GB DDR2; GPU: Sapphire HD 5850 1GB; PSU: SeaSonic M12 600W ATX12V; DirectX: DX11.
    Event Viewer after a crash always has these logged:
    A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Bus/Interconnect Error. Processor ID: 1. The details view of this entry contains further information.
    A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Bus/Interconnect Error. Processor ID: 0. The details view of this entry contains further information.
    A previous card that I had (4850x2) also had these errors, so I changed video cards, but the same thing is happening.

    Read the article

  • Different fan behaviour in my laptop after upgrade, what to do now?

    - by student
    After upgrading from lubuntu 13.10 to 14.04 the fan of my laptop seems to run much more often than in 13.10. When it runs, it doesn't run continously but starts and stops every second. fwts fan results in Results generated by fwts: Version V14.03.01 (2014-03-27 02:14:17). Some of this work - Copyright (c) 1999 - 2014, Intel Corp. All rights reserved. Some of this work - Copyright (c) 2010 - 2014, Canonical. This test run on 12/05/14 at 21:40:13 on host Linux einstein 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64. Command: "fwts fan". Running tests: fan. fan: Simple fan tests. -------------------------------------------------------------------------------- Test 1 of 2: Test fan status. Test how many fans there are in the system. Check for the current status of the fan(s). PASSED: Test 1, Fan cooling_device0 of type Processor has max cooling state 10 and current cooling state 0. PASSED: Test 1, Fan cooling_device1 of type Processor has max cooling state 10 and current cooling state 0. PASSED: Test 1, Fan cooling_device2 of type LCD has max cooling state 15 and current cooling state 10. Test 2 of 2: Load system, check CPU fan status. Test how many fans there are in the system. Check for the current status of the fan(s). Loading CPUs for 20 seconds to try and get fan speeds to change. Fan cooling_device0 current state did not change from value 0 while CPUs were busy. Fan cooling_device1 current state did not change from value 0 while CPUs were busy. ADVICE: Did not detect any change in the CPU related thermal cooling device states. It could be that the devices are returning static information back to the driver and/or the fan speed is automatically being controlled by firmware using System Management Mode in which case the kernel interfaces being examined may not work anyway. ================================================================================ 3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only. ================================================================================ 3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only. 
Test Failure Summary ================================================================================ Critical failures: NONE High failures: NONE Medium failures: NONE Low failures: NONE Other failures: NONE Test |Pass |Fail |Abort|Warn |Skip |Info | ---------------+-----+-----+-----+-----+-----+-----+ fan | 3| | | | | | ---------------+-----+-----+-----+-----+-----+-----+ Total: | 3| 0| 0| 0| 0| 0| ---------------+-----+-----+-----+-----+-----+-----+ Here is the output of lsmod lsmod Module Size Used by i8k 14421 0 zram 18478 2 dm_crypt 23177 0 gpio_ich 13476 0 dell_wmi 12761 0 sparse_keymap 13948 1 dell_wmi snd_hda_codec_hdmi 46207 1 snd_hda_codec_idt 54645 1 rfcomm 69160 0 arc4 12608 2 dell_laptop 18168 0 bnep 19624 2 dcdbas 14928 1 dell_laptop bluetooth 395423 10 bnep,rfcomm iwldvm 232285 0 mac80211 626511 1 iwldvm snd_hda_intel 52355 3 snd_hda_codec 192906 3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel snd_hwdep 13602 1 snd_hda_codec snd_pcm 102099 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30144 1 snd_seq_midi coretemp 13435 0 kvm_intel 143060 0 kvm 451511 1 kvm_intel snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi joydev 17381 0 serio_raw 13462 0 iwlwifi 169932 1 iwldvm pcmcia 62299 0 snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi snd_timer 29482 2 snd_pcm,snd_seq lpc_ich 21080 0 cfg80211 484040 3 iwlwifi,mac80211,iwldvm yenta_socket 41027 0 pcmcia_rsrc 18407 1 yenta_socket pcmcia_core 23592 3 pcmcia,pcmcia_rsrc,yenta_socket binfmt_misc 17468 1 snd 69238 17 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_idt,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi soundcore 12680 1 snd parport_pc 32701 0 mac_hid 13205 0 ppdev 17671 0 lp 17759 0 parport 42348 3 lp,ppdev,parport_pc firewire_ohci 40409 0 psmouse 102222 0 sdhci_pci 23172 0 sdhci 43015 1 sdhci_pci firewire_core 68769 1 firewire_ohci crc_itu_t 12707 1 firewire_core ahci 25819 2 libahci 32168 1 ahci i915 783485 2 wmi 19177 1 dell_wmi i2c_algo_bit 13413 1 i915 drm_kms_helper 52758 1 i915 e1000e 254433 0 drm 302817 3 i915,drm_kms_helper ptp 18933 1 e1000e pps_core 19382 1 ptp video 19476 1 i915 I tried one answer to the similar question: loud fan on Ubuntu 14.04 and created a /etc/i8kmon.conf like the following: # Run as daemon, override with --daemon option set config(daemon) 1 # Automatic fan control, override with --auto option set config(auto) 1 # Status check timeout (seconds), override with --timeout option set config(timeout) 2 # Report status on stdout, override with --verbose option set config(verbose) 1 # Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt} set config(0) {{0 0} -1 55 -1 55} set config(1) {{0 1} 50 60 55 65} set config(2) {{1 1} 55 80 60 85} set config(3) {{2 2} 70 128 75 128} With this setup the fan goes on even if the temperature is below 50 degree celsius (I don't see a pattern). However I get the impression that the CPU got's hotter in average than without this file. What changes from 13.10 to 14.04 may be responsible for this? If this is a bug, for which package I should report the bug?
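The same cooling-device and thermal-zone state that fwts reports can also be watched directly from sysfs while the machine is under load, which helps show whether the noisy device is really the CPU fan or something else. A minimal sketch (these are the standard Linux thermal sysfs paths; which entries exist varies by laptop):

    # Minimal sketch: print thermal zones and cooling devices from sysfs so the
    # fan/temperature behaviour can be watched while the machine is busy.
    # Standard Linux thermal sysfs locations; the entries present vary by machine.
    import glob
    import os

    def read(path):
        with open(path) as f:
            return f.read().strip()

    def main():
        for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
            temp_c = int(read(os.path.join(zone, "temp"))) / 1000.0
            print("%s (%s): %.1f C" % (os.path.basename(zone),
                                       read(os.path.join(zone, "type")), temp_c))
        for dev in sorted(glob.glob("/sys/class/thermal/cooling_device*")):
            print("%s (%s): state %s of %s" % (
                os.path.basename(dev), read(os.path.join(dev, "type")),
                read(os.path.join(dev, "cur_state")),
                read(os.path.join(dev, "max_state"))))

    if __name__ == "__main__":
        main()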

    Read the article

  • How to disable my laptop's defective keyboard?

    - by Kolink
    I have an Alienware M17x R2, and after a couple of years of use I started having problems with the keyboard: intermittently, and without warning or apparent cause (even if I'm not touching the computer), the S key (and more recently the D key) will start firing incessant signals until I hit the offending key. I've been using a USB keyboard, which is conveniently exactly the right size to fit over the built-in keyboard without touching any buttons. However, even as I type, the defective keyboard will fire its stream of signals. I have contacted Dell about this, twice in fact. The first time, I was just bluntly told that they don't ship replacement keyboards. The second time I was told that they did, but that they were out of stock and would contact me when they were in stock again. They haven't contacted me since. I can't seem to work out which device to disable in Device Manager. Here's a screenshot of what I have: none of the keyboard devices have "Disable" in their context menus, only "Uninstall"... so you'll understand if I'm hesitant to try anything myself. Any advice on how to fix this?

    Read the article

  • XSL Template outputting massive chunk of text, rather than HTML. But only on one section

    - by Throlkim
    I'm having a slightly odd situation with an XSL template. Most of it outputs fine, but a certain for-each loop is causing me problems. Here's the XML: <area> <feature type="Hall"> <Heading><![CDATA[Hall]]></Heading> <Para><![CDATA[Communal gardens, pathway leading to PVCu double glazed communal front door to]]></Para> </feature> <feature type="Entrance Hall"> <Heading><![CDATA[Communal Entrance Hall]]></Heading> <Para><![CDATA[Plain ceiling, centre light fitting, fire door through to inner hallway, wood and glazed panelled front door to]]></Para> </feature> <feature type="Inner Hall"> <Heading><![CDATA[Inner Hall]]></Heading> <Para><![CDATA[Plain ceiling with pendant light fitting and covings, security telephone, airing cupboard housing gas boiler serving domestic hot water and central heating, telephone point, storage cupboard housing gas and electric meters, wooden panelled doors off to all rooms.]]></Para> </feature> <feature type="Lounge (Reception)" width="3.05" length="4.57" units="metre"> <Heading><![CDATA[Lounge (Reception)]]></Heading> <Para><![CDATA[15' 6" x 10' 7" (4.72m x 3.23m) Window to the side and rear elevation, papered ceiling with pendant light fitting and covings, two double panelled radiators, power points, wall mounted security entry phone, TV aerial point.]]></Para> </feature> <feature type="Kitchen" width="3.05" length="3.66" units="metre"> <Heading><![CDATA[Kitchen]]></Heading> <Para><![CDATA[12' x 10' (3.66m x 3.05m) Double glazed window to the rear elevation, textured ceiling with strip lighting, range of base and wall units in Beech with brushed aluminium handles, co-ordinated working surfaces with inset stainless steel sink with mixer taps over, co-ordinated tiled splashbacks, gas and electric cooker points, large storage cupboard with shelving, power points.]]></Para> </feature> <feature type="Entrance Porch"> <Heading><![CDATA[Balcony]]></Heading> <Para><![CDATA[Views across the communal South facing garden, wrought iron balustrade.]]></Para> </feature> <feature type="Bedroom" width="3.35" length="3.96" units="metre"> <Heading><![CDATA[Bedroom One]]></Heading> <Para><![CDATA[13' 6" x 11' 5" (4.11m x 3.48m) Double glazed windows to the front and side elevations, papered ceiling with pendant light fittings and covings, single panelled radiator, power points, telephone point, security entry phone.]]></Para> </feature> <feature type="Bedroom" width="3.05" length="3.35" units="metre"> <Heading><![CDATA[Bedroom Two]]></Heading> <Para><![CDATA[11' 4" x 10' 1" (3.45m x 3.07m) Double glazed window to the front elevation, plain ceiling with centre light fitting and covings, power points.]]></Para> </feature> <feature type="bathroom"> <Heading><![CDATA[Bathroom]]></Heading> <Para><![CDATA[Obscure double glazed window to the rear elevation, textured ceiling with centre light fitting and extractor fan, suite in white comprising of low level WC, wall mounted wash hand basin and walk in shower housing 'Triton T80' electric shower, co-ordinated tiled splashbacks.]]></Para> </feature> </area> And here's the section of my template that processes it: <xsl:for-each select="area"> <li> <xsl:for-each select="feature"> <li> <h5> <xsl:value-of select="Heading"/> </h5> <xsl:value-of select="Para"/> </li> </xsl:for-each> </li> </xsl:for-each> And here's the output: Hall Communal gardens, pathway leading to PVCu double glazed communal front door to Communal Entrance Hall Plain ceiling, centre light fitting, fire door through to inner hallway, wood and glazed panelled front door to 
Inner Hall Plain ceiling with pendant light fitting and covings, security telephone, airing cupboard housing gas boiler serving domestic hot water and central heating, telephone point, storage cupboard housing gas and electric meters, wooden panelled doors off to all rooms. Lounge (Reception) 15' 6" x 10' 7" (4.72m x 3.23m) Window to the side and rear elevation, papered ceiling with pendant light fitting and covings, two double panelled radiators, power points, wall mounted security entry phone, TV aerial point. Kitchen 12' x 10' (3.66m x 3.05m) Double glazed window to the rear elevation, textured ceiling with strip lighting, range of base and wall units in Beech with brushed aluminium handles, co-ordinated working surfaces with inset stainless steel sink with mixer taps over, co-ordinated tiled splashbacks, gas and electric cooker points, large storage cupboard with shelving, power points. Balcony Views across the communal South facing garden, wrought iron balustrade. Bedroom One 13' 6" x 11' 5" (4.11m x 3.48m) Double glazed windows to the front and side elevations, papered ceiling with pendant light fittings and covings, single panelled radiator, power points, telephone point, security entry phone. Bedroom Two 11' 4" x 10' 1" (3.45m x 3.07m) Double glazed window to the front elevation, plain ceiling with centre light fitting and covings, power points. Bathroom Obscure double glazed window to the rear elevation, textured ceiling with centre light fitting and extractor fan, suite in white comprising of low level WC, wall mounted wash hand basin and walk in shower housing 'Triton T80' electric shower, co-ordinated tiled splashbacks. For reference, here's the entire XSLT: http://pastie.org/private/eq4gjvqoc1amg9ynyf6wzg The rest of it all outputs fine - what am I missing from the above section?
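One way to narrow this down is to run the transform outside the page and inspect the raw result, to see whether the <li>/<h5> markup is actually being produced or is being escaped or dropped further downstream. A small sketch using Python with the third-party lxml package (file names are placeholders for the XML and the full stylesheet linked above):

    # Hypothetical debugging sketch: apply the stylesheet standalone and look at
    # the raw output, to check whether the <li>/<h5> tags are really missing or
    # are produced and then mangled by whatever consumes the result.
    from lxml import etree

    xml_doc = etree.parse("property.xml")            # document containing the <area>/<feature> data
    transform = etree.XSLT(etree.parse("property.xsl"))

    result = transform(xml_doc)
    print(str(result))                               # inspect whether <li> and <h5> appear here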

    Read the article

  • Audio onprogress in Chrome not working

    - by user351709
    Hi I am having a problem getting onprogress event for the audio tag working on chrome. it seems to work on fire fox. http://www.scottandrew.com/pub/html5audioplayer/ works on chrome but there is no progress bar update. When I copy the code and change the src to a .wav file and run it on fire fox it works perfectly. <style type="text/css"> #content { clear:both; width:60%; } .player_control { float:left; margin-right:5px; height: 20px; } #player { height:22px; } #duration { width:400px; height:15px; border: 2px solid #50b; } #duration_background { width:400px; height:15px; background-color:#ddd; } #duration_bar { width:0px; height:13px; background-color:#bbd; } #loader { width:0px; height:2px; } .style1 { height: 35px; } </style> <script type="text/javascript"> var audio_duration; var audio_player; function pageLoaded() { audio_player = $("#aplayer").get(0); //get the duration audio_duration = audio_player.duration; $('#totalTime').text(formatTimeSeconds(audio_player.duration)); //set the volume } function update(){ //get the duration of the player dur = audio_player.duration; time = audio_player.currentTime; fraction = time/dur; percent = (fraction*100); wrapper = document.getElementById("duration_background"); new_width = wrapper.offsetWidth*fraction; document.getElementById("duration_bar").style.width = new_width + "px"; $('#currentTime').text(formatTimeSeconds(audio_player.currentTime)); $('#totalTime').text(formatTimeSeconds(audio_player.duration)); } function formatTimeSeconds(time) { var minutes = Math.floor(time / 60); var seconds = "0" + (Math.floor(time) - (minutes * 60)).toString(); if (isNaN(minutes) || isNaN(seconds)) { return "0:00"; } var Strseconds = seconds.substr(seconds.length - 2); return minutes + ":" + Strseconds; } function playClicked(element){ //get the state of the player if(audio_player.paused) { audio_player.play(); newdisplay = "||"; }else{ audio_player.pause(); newdisplay = ">"; } $('#totalTime').text(formatTimeSeconds(audio_player.duration)); element.value = newdisplay; } function trackEnded(){ //reset the playControl to 'play' document.getElementById("playControl").value=">"; } function durationClicked(event){ //get the position of the event clientX = event.clientX; left = event.currentTarget.offsetLeft; clickoffset = clientX - left; percent = clickoffset/event.currentTarget.offsetWidth; duration_seek = percent*audio_duration; document.getElementById("aplayer").currentTime=duration_seek; } function Progress(evt){ $('#progress').val(Math.round(evt.loaded / evt.total * 100)); var width = $('#duration_background').css('width') $('#loader').css('width', evt.loaded / evt.total * width.replace("px","")); } function getPosition(name) { var obj = document.getElementById(name); var topValue = 0, leftValue = 0; while (obj) { leftValue += obj.offsetLeft; obj = obj.offsetParent; } finalvalue = leftValue; return finalvalue; } function SetValues() { var xPos = xMousePos; var divPos = getPosition("duration_background"); var divWidth = xPos - divPos; var Totalwidth = $('#duration_background').css('width').replace("px","") audio_player.currentTime = divWidth / Totalwidth * audio_duration; $('#duration_bar').css('width', divWidth); } </script> </head> <script type="text/javascript" src="js/MousePosition.js" ></script> <body onLoad="pageLoaded();"> <table> <tr> <td valign="bottom"><input id="playButton" type="button" onClick="playClicked(this);" value=">"/></td> <td colspan="2" class="style1" valign="bottom"> <div id='player'> <div id="duration" class='player_control' > 
<div id="duration_background" onClick="SetValues();"> <div id="loader" style="background-color: #00FF00; width: 0px;"></div> <div id="duration_bar" class="duration_bar"></div> </div> </div> </div> </td> </tr> <tr> <td> </td> <td> <span id="currentTime">0:00</span> </td> <td align="right" > <span id="totalTime">0:00</span> </td> </tr> </table> <audio id='aplayer' src='<%=getDownloadLink() %>' type="audio/ogg; codecs=vorbis" onProgress="Progress(event);" onTimeUpdate="update();" onEnded="trackEnded();" > <b>Your browser does not support the <code>audio</code> element. </b> </audio> </body>

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind West Wind Web Toolkit. Let me preface this by saying that I like things to be easy. Yes flexible is very important as well but not at the expense of over-complexity. The goal I've had with my tools is make it drop dead easy, with good performance while providing the core features that I'm after, which are: Easy AJAX/JSON Callbacks Ability to return any kind of non JSON content (string, stream, byte[], images) Ability to work with both XML and JSON interchangeably for input/output Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need) Ability to create clean URLS with Routing Ability to use standard ASP.NET HTTP Stack for HTTP semantics It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit. Installing the Toolkit Assemblies The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web) and drop it into the project by right clicking in your Project and choosing Manage NuGet Packages from anywhere in the Project.   When done you end up with your project looking like this: What just happened? Nuget added two assemblies - Westwind.Web and Westwind.Utilities and the client ww.jquery.js library. It also added a couple of references into web.config: The default namespaces so they can be accessed in pages/views and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js). Creating a new Service The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one StockService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute. 
Here's the modified base handler implementation now looks like with an added HelloWorld method: using System; using Westwind.Web; namespace WestWindWebAjax { /// <summary> /// Handler implements CallbackHandler to provide REST/AJAX services /// </summary> public class SampleService : CallbackHandler { [CallbackMethod] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } } } Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here. Services Urlbased Syntax Once you compile, the 'service' is live can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this: http://localhost/WestWindWebAjax/StockService.ashx?Method=HelloWorld&name=Rick which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON): (note by default JSON will be downloaded by most browsers not displayed - various options are available to view JSON right in the browser) If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces: <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string> Cleaner URLs with Routing Syntax If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create: protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); } With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition: [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } The same URL I previously used now becomes a bit shorter and more readable with: http://localhost/WestWindWebAjax/HelloWorld/Rick It's an easy way to create cleaner URLs and still get the same functionality. Calling the Service with $.getJSON() Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page: <!DOCTYPE html> <html> <head> <link href="Css/Westwind.css" rel="stylesheet" type="text/css" /> <script src="scripts/jquery.min.js" type="text/javascript"></script> <script src="scripts/ww.jquery.min.js" type="text/javascript"></script> </head> <body> Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result: <fieldset> <legend>Hello World</legend> Please enter a name: <input type="text" name="txtHello" id="txtHello" value="" /> <input type="button" id="btnSayHello" value="Say Hello (POST)" /> <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" /> <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;" > </div> </fieldset> Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server. 
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
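As an aside (not from the original post): because the endpoint is plain HTTP returning JSON, it can be consumed from any client, not just the browser. A minimal Python sketch using the query-string calling convention shown earlier - the localhost URL is an assumption carried over from the HelloWorld example:

    # Minimal sketch: consume the CallbackHandler endpoint outside the browser.
    # The host/path are assumptions from the earlier examples; only the
    # ?Method=...&param=... calling convention is taken from the post.
    import json
    import urllib.parse
    import urllib.request

    base = "http://localhost/WestWindWebAjax/SampleService.ashx"
    params = urllib.parse.urlencode({"Method": "GetStockQuote", "symbol": "MSFT"})

    with urllib.request.urlopen(base + "?" + params) as response:
        quote = json.loads(response.read().decode("utf-8"))

    print(quote["Company"], quote["LastPrice"])

The in-browser jQuery client for the same method follows.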
Lets create a small HTML block that lets us query for the quote and display it: <fieldset> <legend>Single Stock Quote</legend> Please enter a stock symbol: <input type="text" name="txtSymbol" id="txtSymbol" value="msft" /> <input type="button" id="btnStockQuote" value="Get Quote" /> <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;"> <div class="label-left">Company:</div> <div id="stockCompany"></div> <div class="label-left">Last Price:</div> <div id="stockLastPrice"></div> <div class="label-left">Quote Time:</div> <div id="stockQuoteTime"></div> </div> </fieldset> The final result looks something like this:   Let's hook up the button handler to fire the request and fill in the data as shown: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").show().fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST")); }, onPageError); }); So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for success and failure callbacks.  The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this: { "Symbol":"MSFT", "Company":"Microsoft Corpora", "OpenPrice":26.11, "LastPrice":26.01, "NetChange":0.02, "LastQuoteTime":"2011-11-03T02:00:00Z", "LastQuoteTimeString":"Nov. 11, 2011 4:20pm" } which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily and that's the reslut that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and QuoteTime is a date. Note about the date value: JavaScript doesn't have a date literal although the JSON embedded ISO string format used above  ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them by string. This is why the StockQuote actually returns a string value of LastQuoteTimeString for the same date. ajaxMethodCallback always converts dates properly into 'real' dates and the example above uses the real date value along with a .formatDate() data extension (also in ww.jquery.js) to display the raw date properly. Errors and Exceptions So what happens if your code fails? For example if I pass an invalid stock symbol to the GetStockQuote() method you notice that the code does this: if (quote == null) throw new ApplicationException("Invalid Symbol passed."); CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. 
In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); }, function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); }); The error object has a isCallbackError, message and  stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: Client side, transport and server side errors. Regardless of which type of error you get the same object passed (as well as the XHR instance optionally) which makes for a consistent error retrieval mechanism. Specifying HttpVerbs You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute: [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)] public string HelloWorld(string name) { … } If you're building REST style API's this might be useful to force certain request semantics onto the client calling. For the above if call with a non-allowed HttpVerb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All). Passing in object Parameters Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example now that passes an object from the client to the server. Keeping with the Stock theme here lets add a method called BuyOrder that lets us buy some shares for a stock. Consider the following service method that receives an StockBuyOrder object as a parameter: [CallbackMethod] public string BuyStock(StockBuyOrder buyOrder) { var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } public class StockBuyOrder { public string Symbol { get; set; } public int Quantity { get; set; } public DateTime BuyOn { get; set; } public StockBuyOrder() { BuyOn = DateTime.Now; } } This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method. 
On the client side we now have a very simple form that captures the three values on a form: <fieldset> <legend>Post a Stock Buy Order</legend> Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp; Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp; Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" /> <input type="button" id="btnBuyStock" value="Buy Stock" /> <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div> </fieldset> The completed form and demo then looks something like this:   The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this: $("#btnBuyStock").click(function () { // create an object map that matches StockBuyOrder signature var buyOrder = { Symbol: $("#txtBuySymbol").val(), Quantity: $("#txtBuyQty").val() * 1, // number Entered: new Date() } ajaxCallMethod("SampleService.ashx", "BuyStock", [buyOrder], function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError); }); The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors. Pass POST data instead of Objects In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data: [CallbackMethod] public string BuyStockPost() { StockBuyOrder buyOrder = new StockBuyOrder(); buyOrder.Symbol = Request.Form["txtBuySymbol"]; ; int qty; int.TryParse(Request.Form["txtBuyQuantity"], out qty); buyOrder.Quantity = qty; DateTime time; DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time); buyOrder.BuyOn = time; // Or easier way yet //FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object. 
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code: $("#btnStockQuote").click(function () { var symbol = $("#txtSymbol").val(); ajaxCallMethod("SampleService.ashx", "GetStockQuote", [symbol], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); // display a stock chart $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2"); },onPageError); }); The resulting output then looks like this: The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the 2 year stock data as part of the StockServer class which you can find in the sample download. The ability to return arbitrary data from a service is useful as you can see - in this case the chart is clearly associated with the service and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for downloads etc. which is becoming increasingly more common to be returned from REST endpoints and other applications. Why reinvent? Obviously the examples I've shown here are pretty basic in terms of functionality. But I hope they demonstrate the core features of AJAX callbacks that you need to work through in most applications which is simple: return data, send back data and potentially retrieve data in various formats. While there are other solutions when it comes down to making AJAX callbacks and servicing REST like requests, I like the flexibility my home grown solution provides. Simply put it's still the easiest solution that I've found that addresses my common use cases: AJAX JSON RPC style callbacks Url based access XML and JSON Output from single method endpoint XML and JSON POST support, querystring input, routing parameter mapping UrlEncoded POST data support on callbacks Ability to return stream/raw string data Essentially ability to return ANYTHING from Service and pass anything All these features are available in various solutions but not together in one place. I've been using this code base for over 4 years now in a number of projects both for myself and commercial work and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need to create. Need to create a few simple routines that spit back some data, but don't want to create a Page or View or full blown handler for it? Create a CallbackHandler and add a method or multiple methods and you have your generic endpoints.  It's a quick and easy way to add small code pieces that are pretty efficient as they're running through a pretty small handler implementation. I can have this up and running in a couple of minutes literally without any setup and returning just about any kind of data. Resources Download the Sample NuGet: Westwind Web and AJAX Utilities (Westwind.Web) ajaxCallMethod() Documentation Using the AjaxMethodCallback WebForms Control West Wind Web Toolkit Home Page West Wind Web Toolkit Source Code © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  jQuery  AJAX   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
    We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines. Let's have a look at how we can do this.

    Log on to VM – 2 running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network; if you already have a subscription you should see the screen shot below, if not, you will be asked to complete the subscription first.

    Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier; in theory the token you get here should match the token you added to the Test Agent config file.

    Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

    Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology.

    NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

    Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).

    Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection.

    III. Importing the Certificate into the Windows Azure Management Portal
    But before that you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the Windows Azure Management Portal, click on 'Hosted Services, Storage Accounts & CDN' and then 'Management Certificates', followed by Add Certificates as shown in the screen shot below. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

    IV. Publish Windows Azure Worker Role aka Test Agent
    Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
    From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role.

    Once the deployment is complete you should be able to RDP (go to the run prompt, type mstsc and in the pop up enter the machine name) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure. You will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername' and 'RemoteAccess.AccountEncryptedPassword' (just enter the password unencrypted), fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

    So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is running. If yes, give yourself a pat on the back, you are 80% mission accomplished!

    Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between the test agents themselves. Click Save.

    Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

    V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller
    Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, 'en_visual_studio_agents_2010_x86_x64_dvd_509679.iso', extract the iso and navigate to where you have extracted it. In my case, I extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

    Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something otherwise).

    In the run options, you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post, we are using the most powerful user so that any policies or restrictions don't block you.

    Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.

    And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

    VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
    I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.
    Blog posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate
    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate
    Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

    Review and What's next?
    A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!
    Advantages:
    - Utilizing the power of Azure compute to run a heavy virtual user load.
    - Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
    - Most importantly, test network latency (network latency and speed of connection are two different things – usually network latency is very hard to test): by placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.
    Next Steps:
    The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Share this post : CodeProject

    Read the article

  • Our Oracle Recruitment Team is Growing - Multiple Job Opportunities in Bangalore, India

    - by david.talamelli
    DON'T GET STUCK IN THE MATRIX. SEE YOUR FUTURE. VISIT THE ORACLE.
    The position(s): CORPORATE RECRUITING RESEARCH ANALYST(S)
    ABOUT ORACLE
    Oracle's business is information: how to manage it, use it, share it, protect it. For three decades, Oracle, the world's largest enterprise software company, has provided the software and services that allow organizations to get the most up-to-date and accurate information from their business systems. Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the process industry. When you run Oracle applications on Oracle technology, you speed implementation, optimize performance, and maximize ROI.
    Great hiring doesn't happen by accident; it's the culmination of a series of thoughtfully planned and well-executed events. At the core of any hiring process is a sourcing strategy. This is where you come in... Do you want to be a part of a world-class recruiting organization that's on the cutting edge of technology? Would you like to experience a rewarding work environment that allows you to further develop your existing skills, while giving you the opportunity to develop new ones? If you answered yes, you've taken your first step towards a future with Oracle.
    We are building a Research Team to support our North America Recruitment Team, and we need creative, smart, and ambitious individuals to help us drive our research department forward. Oracle has a track record for employing and developing the very best in the industry. We invest generously in employee development, training and resources. Be a part of the most progressive internal recruiting team in the industry. For more information about Oracle, please visit our Web site at http://www.oracle.com
    Escape the humdrum job world matrix, visit the Oracle and be a part of a winning team; apply today.
    POSITION: Corporate Recruiting Research Analyst
    LOCATION: Bangalore, India
    RESPONSIBILITIES:
    • Develop a candidate pipeline using Web 2.0 sourcing strategies and advanced Boolean Search techniques to support the U.S. Recruiting Team for various job functions and levels.
    • Engage with assigned recruiters to understand the supported business as well as the recruiting requirements; partner with recruiters to meet expectations and deliver a qualified pipeline of candidates.
    • Source candidates, including both active and passive job seekers, to provide a strong pipeline of qualified candidates for each recruiter; exercise creativity to find candidates using Oracle's advanced sourcing tools/techniques.
    • Fully evaluate the candidate's background against the requirements provided by the recruiter, and process leads using an ATS (Applicant Tracking System).
    • Manage your efforts efficiently; maintain the highest levels of client satisfaction as well as strong operations and reporting of research activities.
    PREFERRED QUALIFICATIONS:
    • Fluent in English, with excellent written and oral communication skills.
    • Undergraduate degree required, MBA or Masters preferred.
    • Proficiency with Boolean Search techniques desired.
    • Ability to learn new software applications quickly.
    • Must be able to accommodate some U.S. evening hours.
    • Strong organization and attention to detail skills.
    • Prior HR or corporate in-house recruiting experience is a plus.
    • The fire in the belly to learn new ideas and succeed.
    • Ability to work in team and individual environments.
    This is an excellent opportunity to join Oracle in our Bangalore offices.
Interested applicants can send their resume to [email protected] or contact David on +61 3 8616 3364

    Read the article

  • Designing an email system to guarantee delivery

    - by GlenH7
    We are looking to expand our use of email for notification purposes. We understand it will generate more inbox volume, but we are being selective about which events we fire notification on in order to keep the signal-to-noise ratio high. The big question we are struggling with is designing a system that guarantees that the email was delivered. If an email isn't delivered, we will consider that an exception event that needs to be investigated. In reality, I say almost guarantees because there aren't any true guarantees with email. We're just looking for a practical solution to making sure the email got there and experiences others have had with the various approaches to guaranteeing delivery. For the TL;DR crowd - how do we go about designing a system to guarantee delivery of emails? What techniques should we consider so we know the emails were delivered? Our biggest area of concern is what techniques to use so that we know when a message is sent out that it either lands in an inbox or it failed and we need to do something else. Additional requirements: We're not at the stage of including an escalation response, but we'll want that in the future or so we think. Most notifications will be internal to our enterprise, but we will have some notifications being sent to external clients. Some of our application is in a hosted environment. We haven't determined if those servers can access our corporate email servers for relaying or if they'll be acting as their own mail servers. Base design / modules (at the moment): A module to assign tracking identification A module to send out emails A module to receive delivery notification (perhaps this is the same as the email module) A module that checks sent messages against delivery notification and alerts on undelivered email. Some references: Atwood: Send some email Email Tracking Some approaches: Request a response (aka read-receipt or Message Disposition Notification). Seems prone to failure since we have cross-compatibility issues due to differing mail servers and software. Return receipt (aka Delivery Status Notification). Not sure if all mail servers honor this request or not Require an action and therefore prove reply. Seems burdensome to force the recipients to perform an additional task not related to resolving the issue. And no, we haven't come up with a way of linking getting the issue fixed to whether or not the email was received. Force a click-through / Other site sign-in. Similar to requiring some sort of action, this seems like an additional burden and will annoy the users. On the other hand, it seems the most likely to guarantee someone received the notification. Hidden image tracking. Not all email providers automatically load the image, and how would we associate the image(s) with the email tracking ID? Outsource delivery. This gets us out of the email business, but goes back to how to guarantee the out-sourcer's receipt and subsequent delivery to the end recipient. As a related concern, there will be an n:n relationship between issue notification and recipients. The 1 issue : n recipients subset isn't as much of a concern although if we had a delivery failure we would want to investigate and fix the core issue. Of bigger concern is n issues : 1 recipient, and we're specifically concerned in making sure that all n issues were received by the recipient. How does forum software or issue tracking software handle this requirement? If a tracking identifier is used, Where is it placed in the email? In the Subject, or the Body?
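    For the "return receipt" and "read receipt" approaches listed above, the standard .NET mail API already exposes the relevant knobs. The sketch below is illustrative only (server names and addresses are made up, and as the question notes, none of these requests are guaranteed to be honoured by the receiving side); it shows a tracking identifier placed in the subject, a Delivery Status Notification request, and a Message Disposition Notification header.

    ```csharp
    using System.Net.Mail;

    class NotificationSender
    {
        // Sketch: send one notification with DSN and MDN requests attached.
        public static void Send(string trackingId, string to)
        {
            var message = new MailMessage("noreply@example.com", to)   // hypothetical addresses
            {
                Subject = "[Notify " + trackingId + "] Issue update",   // tracking id in the subject
                Body = "Details of the event..."
            };

            // DSN: ask the SMTP infrastructure to report success, failure or delay.
            message.DeliveryNotificationOptions =
                DeliveryNotificationOptions.OnSuccess |
                DeliveryNotificationOptions.OnFailure |
                DeliveryNotificationOptions.Delay;

            // MDN (read receipt): advisory only; many clients ignore it or prompt the user.
            message.Headers.Add("Disposition-Notification-To", "receipts@example.com");

            using (var client = new SmtpClient("smtp.example.com"))    // hypothetical relay
            {
                client.Send(message);
            }
        }
    }
    ```

    A delivery-checking module would then correlate any bounce or DSN messages that come back with the tracking identifier and alert on anything still unaccounted for after a timeout.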

    Read the article

  • Displaying JSON in your Browser

    - by Rick Strahl
    Do you work with AJAX requests a lot and need to quickly check URLs for JSON results? Then you probably know that it's a fairly big hassle to examine JSON results directly in the browser. Yes, you can use FireBug or Fiddler, which work pretty well for actual AJAX requests, but if you just fire off a URL for quick testing in the browser you usually get hit by the Save As dialog and the download manager, followed by having to open the saved document in a text editor, at least in FireFox. Enter JSONView, which allows you to simply display JSON results directly in the browser. For example, imagine I have a URL like this: http://localhost/westwindwebtoolkitweb/RestService.ashx?Method=ReturnObject&format=json&Name1=Rick&Name2=John&date=12/30/2010 typed directly into the browser, and that it returns a complex JSON object. With JSONView the result looks like this: No fuss, no muss. It just works. Here the result is an array of Person objects that contain additional address child objects, displayed right in the browser.

    JSONView basically adds content type checking for application/json results, and when it finds a JSON result it takes over the rendering and formats the display in the browser. Note that it re-formats the raw JSON as well for a nicer display view, along with collapsible regions for objects. You can still use View Source to see the raw JSON string returned. For me this is a huge time-saver: I work with AJAX result data using GET and REST style URLs quite a bit, and quickly and easily displaying JSON is a key part of my development day. JSONView, for all its simplicity, fits that bill for me. If you're doing AJAX development and you often review URL based JSON results, do yourself a favor and pick up a copy of JSONView.

    Other Browsers
    JSONView works only with FireFox – what about other browsers?

    Chrome
    Chrome actually displays raw JSON responses as plain text without any plug-ins. There's no plug-in or configuration needed, it just works, although you won't get any fancy formatting. [updated from comments] There's also a port of JSONView available for Chrome from here: https://chrome.google.com/webstore/detail/chklaanhfefbnpoihckbnefhakgolnmc It looks like it works just about the same as the JSONView plug-in for FireFox. Thanks to all that pointed this out…

    Internet Explorer
    Internet Explorer probably has the worst response to JSON encoded content: it displays an error page as it apparently tries to render JSON as XML. Yeah, that seems real smart – rendering JSON as an XML document. WTF? To get at the actual JSON output, you can use View Source. To get IE to display JSON directly as text you can add a MIME type mapping in the registry:
    - Create a new application/json key in: HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json
    - Add a string value of CLSID with a value of {25336920-03F9-11cf-8FD0-00AA00686F13}
    - Add a DWORD value of Encoding with a value of 80000
    I can't take credit for this tip – I found it first on Sky Sander's Blog. Note that the CLSID can be used for just about any type of text data you want to display as plain text in IE. It's the in-place display mechanism and it should work for most text content. For example, it might also be useful for looking at CSS and JS files inside of the browser instead of downloading those documents as well.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, AJAX
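    If you would rather script those registry steps than click through regedit, a small sketch using the Microsoft.Win32 registry API is below. It must be run elevated; the key path and values are taken from the steps above, with 80000 treated as the hex value 0x80000.

    ```csharp
    using Microsoft.Win32;

    class RegisterJsonMimeType
    {
        static void Main()
        {
            // Create the application/json MIME entry so IE renders JSON as plain text.
            using (RegistryKey key = Registry.ClassesRoot.CreateSubKey(
                @"MIME\Database\Content Type\application/json"))
            {
                // In-place plain text display handler CLSID from the post above.
                key.SetValue("CLSID", "{25336920-03F9-11cf-8FD0-00AA00686F13}");

                // Encoding value 80000 (hex), as described above.
                key.SetValue("Encoding", 0x80000, RegistryValueKind.DWord);
            }
        }
    }
    ```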

    Read the article

  • MongoDB usage best practices

    - by andresv
    The project I'm working on uses MongoDB for some stuff so I'm creating some documents to help developers speedup the learning curve and also avoid mistakes and help them write clean & reliable code. This is my first version of it, so I'm pretty sure I will be adding more stuff to it, so stay tuned! C# Official driver notes The 10gen official MongoDB driver should always be referenced in projects by using NUGET. Do not manually download and reference assemblies in any project. C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart Reference links C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home MongoDB Server Downloads: http://www.mongodb.org/downloads MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects Tutorials Tutorial MongoDB con ASP.NET MVC - Ejemplo Práctico (Spanish):http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx MongoDB and C#:http://www.codeproject.com/Articles/87757/MongoDB-and-C C# driver LINQ tutorial:http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial Safe Mode Connection The C# driver supports two connection modes: safe and unsafe. Safe connection mode (only applies to methods that modify data in a database like Inserts, Deletes and Updates. While the current driver defaults to unsafe mode (safeMode == false) it's recommended to always enable safe mode, and force unsafe mode on specific things we know aren't critical. When safe mode is enabled, the driver internal code calls the MongoDB "getLastError" function to ensure the last operation is completed before returning control the the caller. For more information on using safe mode and their implicancies on performance and data reliability see: http://www.mongodb.org/display/DOCS/getLastError+Command If safe mode is not enabled, all data modification calls to the database are executed asynchronously (fire & forget) without waiting for the result of the operation. This mode could be useful for creating / updating non-critical data like performance counters, usage logging and so on. It's important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode The safe mode configuration can be specified at different levels: Connection string: mongodb://hostname/?safe=true Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method Operation: for example, when executing the collection.Insert(document, safeMode) method Some useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver Exception Handling The driver ensures that an exception will be thrown in case of something going wrong, in case of using safe mode (as said above, when not using safe mode no exception will be thrown no matter what the outcome of the operation is). 
As explained here https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM there is no need to check for any returned value from a driver method inserting data. With updates the situation is similar to any other relational database: if an update command doesn't affect any records, the call will suceed anyway (no exception thrown) and you manually have to check for something like "records affected". For MongoDB, an Update operation will return an instance of the "SafeModeResult" class, and you can verify the "DocumentsAffected" property to ensure the intended document was indeed updated. Note: Please remember that an Update method might return a null instance instead of an "SafeModeResult" instance when safe mode is not enabled. Useful Community Articles Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/ FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/ Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson MongoDB Introduction, shell, when not to use, maintenance, upgrade, backups, memory, sharding, etc: http://www.markus-gattol.name/ws/mongodb.html MongoDB Collection level locking support: https://jira.mongodb.org/browse/SERVER-1240 MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html
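    Tying the safe mode levels and the update-result guidance above together, here is a minimal sketch against the official 10gen C# driver of that era (the 1.x generation; names such as MongoServer.Create, SafeMode.True and SafeModeResult belong to that API and differ in newer drivers):

    ```csharp
    using MongoDB.Bson;
    using MongoDB.Driver;
    using MongoDB.Driver.Builders;

    class SafeModeExample
    {
        static void Main()
        {
            // Safe mode switched on for the whole connection via the connection string.
            var server = MongoServer.Create("mongodb://localhost/?safe=true");

            // ...or per database / per collection.
            MongoDatabase db = server.GetDatabase("Ops", SafeMode.True);
            MongoCollection<BsonDocument> audit = db.GetCollection<BsonDocument>("Audit", SafeMode.True);

            // Non-critical data (e.g. perf counters) can deliberately opt out of safe mode.
            MongoCollection<BsonDocument> counters = db.GetCollection<BsonDocument>("Counters", SafeMode.False);
            counters.Insert(new BsonDocument { { "counter", "requests" }, { "value", 1 } });

            // With safe mode on, failures throw, and updates return a SafeModeResult to inspect.
            SafeModeResult result = audit.Update(
                Query.EQ("_id", "order-42"),
                Update.Set("status", "shipped"),
                SafeMode.True);

            if (result != null && result.DocumentsAffected == 0)
            {
                // The update succeeded but matched nothing: handle it like "0 rows affected".
            }
        }
    }
    ```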

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows. For some people local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files: Local Storage Pros You are in control of your data Your files are portable and can go with you when needed if using external or flash drives Files are accessible without an internet connection You can easily add more storage capacity as needed (additional drives, etc.) Cons You need to arrange room for your storage media (if you have multiple externals drives, etc.) Possible hardware failure No access to your files if you forget to bring your storage media with you or it is too bulky to bring along Theft and/or loss of home with all contents due to circumstances like fire If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen…unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure…expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files: Cloud Storage Pros No need to carry around flash or bulky external drives All of your files are accessible wherever there is an internet connection No need to deal with local storage media (or its’ upkeep) Your files are still safe if your home is broken into or other unfortunate circumstances occur Cons Your files and data are not 100% under your control Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc) No access to your files if you do not have an internet connection The cloud storage provider may eventually shutdown due to financial hardship or other unforeseen circumstances The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers! 

    Read the article

  • Google TV Gets Bad Reception. Can Media Center Pull in the Signal?

    - by andrewbrust
    The news hit Monday morning that Google has decided to delay the release of its Google TV platform, and has asked its OEMs to delay any products that embed the software.  Coming just about two weeks prior to the 2011 Consumer Electronics Show (CES), Google’s timing is about the worst imaginable.  CES is where the platform should have had its coming out party, especially given all the anticipation that has built up since its initial announcement came 7 months ago. At last year’s CES, it seemed every consumer electronics company had fashioned its own software stack for Internet-based video programming and applications/widgets on its TVs, optical disc players and set top boxes.  In one case, I even saw two platforms on a single TV set (one provided by Yahoo and the other one native to the TV set). The whole point of Google TV was to solve this problem and offer a standard, embeddable platform.  But that won’t be happening, at least not for a while.  Google seems unable to get it together, and more proprietary approaches, like Apple TV, don’t seem to be setting the world of TV-Internet convergence on fire, either. It seems to me, that when it comes to building a “TV operating system,” Windows Media Center is still the best of a bad bunch.  But it won’t stay so for much longer without some changes.  Will Redmond pick up the ball that Google has fumbled?  I’m skeptical, but hopeful.  Regardless, here are some steps that could help Microsoft make the most of Google’s faux pas: Introduce a new Media Center version that uses XBox 360, rather than Windows 7 (or 8), as the platform.  TV platforms should be appliance-like, not PC-like.  Combine that notion with the runaway sales numbers for Xbox 360 Kinect, and the mass appeal it has delivered for Xbox, and the switch form Windows makes even more sense. As I have pointed out before, Microsoft’s Xbox implementation of its Mediaroom platform (announced and demoed at last year’s CES) gets Redmond 80% of the way toward this goal.  Nothing stops Microsoft from going the other 20%, other than its own apathy, which I hope has dissipated. Reverse the decision to remove Drive Extender technology from Windows Home Server (WHS), and create deep integration between WHS and Media Center.  I have suggested this previously as well, but the recent announcement that Drive Extender would be dropped from WHS 2.0 creates the need for me to a) join the chorus of people urging Microsoft to reconsider and b) reiterate the importance of Media Center-WHS integration in the context of a Google compete scenario. Enable Windows Phone 7 (WP7) as a Media Center client.  This would tighten the integration loop already established between WP7, Xbox and Zune.  But it would also counter Echostar/DISH Network/Sling Media, strike a blow against Google/Android (and even Apple/iOS) and could be the final strike against TiVO. Bring the WP7 user interface to Media Center and Kinect-enable it.  This would further the integration discussed above and would be appropriate recognition of WP7’s Metro UI having been built on the heritage of the original Media Center itself.  And being able to run your DVR even if you can’t find the remote (or can’t see its buttons in the dark) could be a nifty gimmick. Microsoft can do this but its consumer-oriented organization, responsible for Xbox, Zune and WP7, has to take the reins here, or none of this will likely work.  There’s a significant chance that won’t happen, but I won’t let that stop me from hoping that it does and insisting that it must.  
Honestly, this fight is Microsoft’s to lose.

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on.

    Brian writes: "The idea of having unit tests that cover virtually every line of code that I've written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don't feel like I get sufficient benefits from it."

    Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies and no attention to the SOLID principles are a sure-fire reason you would end up doing this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but that should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design.

    Brian again: "Don't get me wrong, I'm a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is 'manual'. Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them."

    The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before.

    Brian: "This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that 'back into a solution'. By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they've got a monstrosity of special cases each designed to handle one specific scenario. There's way more code than there should be and it's way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that's a relatively monolithic exercise."

    TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, the assumption that TDD replaces a methodology is a mistake. Combine it with whatever works for your team/business, but only good communication will help. A good naming scheme/structure for folders, files and tests can help you and your team isolate which tests are for what.
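    As an aside on the hard-dependency point above, here is a small, entirely hypothetical illustration of the difference between a class that drags its tests along with every refactor and one whose tests only pin behaviour through an abstraction:

    ```csharp
    using System;

    // Hard-wired dependency: refactoring WelcomeService or SmtpMailer tends to ripple
    // straight into tests that had to know about SMTP just to test a greeting.
    public class SmtpMailer
    {
        public void Send(string to, string body) { Console.WriteLine("SMTP -> " + to + ": " + body); }
    }

    public class WelcomeService
    {
        private readonly SmtpMailer _mailer = new SmtpMailer();   // concrete dependency baked in

        public void Welcome(string email) { _mailer.Send(email, "Welcome!"); }
    }

    // Dependency inverted: tests supply a fake IMailer and assert only the observable
    // behaviour ("a welcome mail was requested"), so internal refactoring leaves them alone.
    public interface IMailer
    {
        void Send(string to, string body);
    }

    public class WelcomeServiceSolid
    {
        private readonly IMailer _mailer;

        public WelcomeServiceSolid(IMailer mailer) { _mailer = mailer; }

        public void Welcome(string email) { _mailer.Send(email, "Welcome!"); }
    }
    ```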

    Read the article

  • Silverlight Cream for January 26, 2011 -- #1036

    - by Dave Campbell
    In this all-submittal Issue: XamlNinja, Kevin Dockx, Steve Wortham, Andrea Boschin, Mick Norman, Colin Eberhardt, and Rudi Grobler(-2-, -3-, -4-, -5-). Above the Fold: Silverlight: "Getting an invalid cross-thread exception in Silverlight?" Kevin Dockx WP7: "WP7 Contrib – the last messenger" XamlNinja ISO: "How many files are too many files for isolated storage?" Mick Norman Shoutouts: Telerik announced a free WP7 Webinars series that you probably don't want to miss: Join Us for the Special Free Windows Phone 7 Webinars Series. Guest lecturers - Shawn Wildermuth and Mark Arteaga From SilverlightCream.com: WP7 Contrib – the last messenger XamlNinja has a great post up extending Laurent's IMessenger to deal with a tricky issue of trying to fire a message from one VM to another even if the 2nd VM isn't alive yet... oh, and this is in WP7Contrib, so go grab it! Getting an invalid cross-thread exception in Silverlight? Kevin Dockx has a solution to a problem we've all had... the 'invalid cross-thread exception' ... and the solution is even for those of us trying to do this in a VM... cool and easy solution, Kevin! Mastering Storyboards One Mistake at a Time Steve Wortham is back with a tutorial with a great title :) ... check out the progression from one success to another in this picture/title viewer ... don't miss the very end where he has the control rolled up into a CaptionedImageHyperlink, and a link to download it! Windows Phone 7 - Part #2: Your First Application Andrea Boschin has part 2 of his SilverlightShow WP7 series up. Lots of good intro material here on the manifest file and app.xaml ... he even gets into the ApplicationBar, phone orientation, and the Metro theme. How many files are too many files for isolated storage? Mick Norman alerted me to his blog early this morning, and this is his latest post... interesting tests of how many files are too many for ISO on your WP7... and I have to admit... he's stuffing a boatload of them out there in these tests! ... great info Mick! and thanks for the links. A Navigator Control For Visiblox Time Series Charts Colin Eberhardt's latest post is about creating an interactive navigator for large time series datasets in Visiblox charts.... check the images at the top of the post, and it'll be obvious :) ... very cool stuff. MVVM Frameworks with WP7 support Rudi Grobler has been very busy and if you check the dates, these posts are all in a day or two! This first highlights two contenders for MVVM on WP7: Caliburn and MVVMLight... both well-supported... quick intro to each followed by good links out to the author's sites Reading barcodes from your WP7 device Rudi Grobler also has a cool post up on reading barcodes with your WP7... he's using the ZXing Barcode Scanning Library, and makes quick work of the job. Taking Sterling for a Test-Drive Rudi Grobler has a quick intro to Sterlink, Jeremy Likness' ISO database for Silverlight up... quickly taking care of writing and reading back data. SQLite on WP7 After his discussion of Sterling, Rudi Grobler is now demonstrating the use of SQLite that has been ported to WP7. Check out his demo code... looks pretty easy to use. Hacking the WP7 Camera (The basics) Rudi Grobler's latest post is on getting direct access to the camera on WP7... be sure to do all the downloads and check out the external links he has. Stay in the 'Light! 

    Read the article

  • asynchrony is viral

    - by Daniel Moth
    It is becoming hard to write code today without introducing some form of asynchrony and, if you are using .NET (e.g. for Windows Phone 8 or Windows Store apps), that means sooner or later you have to await something and mark your method as async. My most recent examples included introducing speech recognition in my Translator By Moth phone app where I had to await mySpeechRecognizerUI.RecognizeWithUIAsync() and when moving that code base to a Windows Store project just to show a MessageBox I had to await myMessageDialog.ShowAsync(). Any time you need to invoke an asynchronous method in your code, you have a choice to make: kick off the operation but don’t wait for it to complete (otherwise known as fire-and-forget), synchronously wait for it to complete (which will entail blocking, which can be bad, especially on a UI thread), or asynchronously wait for it to complete before continuing on with the rest of the method’s work. In most cases, you want the latter, and the await keyword makes that trivial to implement.  When you use the magical await keyword in front of an API call, then you typically have to make additional changes to your code: This await usage is within a method of course, and now you have to annotate that method with async. Furthermore, you have to change the return type of the method you just annotated so it returns a Task (if it previously returned void), or Task<myOldReturnType> (if it previously returned myOldReturnType). Note that if it returns void, in some cases you could cheat and stop there. Furthermore, any method that called this method you just annotated with async will now also be invoking an asynchronous operation, so you have to make that change in the body of the caller method to introduce the await keyword before the call to the method. …you guessed it, you now have to change this caller method to be annotated with async and have its return types tweaked... …and it goes on virally… At some point you reach the root of your user code, e.g. a GUI event handler, and whoever calls that void method can already deal with the fact that you marked it as async and the viral introduction of the keywords stops there… This is all wonderful progress and a very powerful mechanism, and I just wish someone had written a refactoring tool to take care of this… anyone? I mentioned earlier that you have a choice when invoking an asynchronous operation. If the first time you encounter this you wish to localize the impact of all these changes and essentially try to turn the asynchronous behavior into synchronous by blocking - don't! For reasons why you don't want to do that, read Toub's excellent blog post (and check out the rest of his blog with gems on async programming starting with the Async FAQ). Just embrace the pattern knowing that when you use one instance of an await, you'll propagate the change all the way to the root user code method, e.g. typically an event handler. Related aside: I just finished re-writing my MessageBox wrapper class for Phone projects, including making it work in Windows Store projects, and it does expect you to use it with an await :-). I'll share that in an upcoming post for those of you that have the same need… Comments about this post by Daniel Moth welcome at the original blog.
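    To make the viral pattern described above concrete, here is a minimal sketch (all method names invented for illustration) of one await rippling its async/Task annotations up to the event-handler root:

    ```csharp
    using System;
    using System.Threading.Tasks;

    public class GreetingPage
    {
        // The one genuinely asynchronous call at the bottom of the stack.
        private Task<string> FetchNameAsync()
        {
            return Task.Run(() => "world");
        }

        // Was: string BuildGreeting() - the await forces async and Task<string>.
        private async Task<string> BuildGreetingAsync()
        {
            string name = await FetchNameAsync();
            return "Hello " + name;
        }

        // Was: void ShowGreeting() - the caller is infected too: async and Task.
        private async Task ShowGreetingAsync()
        {
            string greeting = await BuildGreetingAsync();
            Console.WriteLine(greeting);
        }

        // Root of the user code: an event handler, where async void is acceptable
        // and the viral spread of the keywords stops.
        public async void OnButtonClick(object sender, EventArgs e)
        {
            await ShowGreetingAsync();
        }
    }
    ```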

    Read the article

  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I’ve been struggling with an interesting problem with WCF REST since last night and I haven’t been able to track it down. I have a WCF REST Service set up, and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel:

        Server Error in '/AspNetClient' Application.
        Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace:
        [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.]
           System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0
           System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95
           System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54
           System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65
           System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69
           System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38
           System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13
           System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19
           System.Web.HttpApplication.GetFactory(String type) +81
           System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184
        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    What’s really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />

    Arrrgh!!!!! After some experimenting and some help from Glenn Block and his team mates I was able to track down the problem in ApplicationHost.config. Specifically, the problem was that there were multiple *.svc mappings in the ApplicationHost.config file and the older 2.0 runtime-specific versions weren’t marked for the proper runtime. Because these handlers show up at the top of the list they execute first, resulting in assembly load errors for the wrong version of the assembly. To fix this problem I ended up making a couple of changes in applicationhost.config.

    On the machine-level root’s handler mappings I had an entry that looked like this:

        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" />

    and it needs to be changed to this:

        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" />

    Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The point is that the runtime version needs to be set explicitly so that the various *.svc handlers don’t fire purely in the order they are defined, which in the case of a .NET 4.0 app with the original setting would result in an incompatible version of System.ServiceModel being loaded.

    What made this really hard to track down is that even when looking in the debugger while launching the Web app, the AppDomain assembly loads showed System.ServiceModel V4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point and that’s when things break.

    So how did this break? According to the Microsoft folks it’s some older tools that got installed that change the default service handlers. There’s a blog entry that points at this problem with more detail: http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx

    Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in AJAX  ASP.NET  WCF
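    Not part of the fix above, but if you want to audit which *.svc handlers are registered and what preConditions they carry before hand-editing applicationhost.config, a small diagnostic sketch along these lines (using the Microsoft.Web.Administration API that ships with IIS 7.x; run it elevated) could help:

        // Diagnostic sketch only: list *.svc handler mappings and their preConditions
        // from applicationHost.config. Requires a reference to
        // Microsoft.Web.Administration.dll (ships with IIS 7.x) and admin rights.
        using System;
        using Microsoft.Web.Administration;

        class SvcHandlerAudit
        {
            static void Main()
            {
                using (var serverManager = new ServerManager())
                {
                    var config = serverManager.GetApplicationHostConfiguration();
                    var handlersSection = config.GetSection("system.webServer/handlers");

                    foreach (ConfigurationElement handler in handlersSection.GetCollection())
                    {
                        var path = (string)handler["path"];
                        if (!path.EndsWith(".svc", StringComparison.OrdinalIgnoreCase))
                            continue;

                        // Entries pointing at System.ServiceModel 3.0.0.0 should carry
                        // runtimeVersionv2.0 in their preCondition, per the fix above.
                        Console.WriteLine("{0,-20} {1}", handler["name"], handler["preCondition"]);
                    }
                }
            }
        }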

    Read the article

  • Backing up my Windows Home Server to the Cloud&hellip;

    - by eddraper
    Ok, here’s my scenario: Windows Home Server with a little over 3TB of storage. This includes many years of our home network’s PC backups, music, videos, etcetera. I’d like to get a backup off-site, and the existing APIs and apps such as the CloudBerry Labs WHS Backup service are making it easy. Now, all it’s down to is the vendor and the cost of the actual storage. So, I thought I’d take a lazy Saturday morning, do some research on this and get the ball rolling. What I discovered stunned me…

    First off, the pricing for just about everything was loaded with complexity. I learned that it wasn’t just about storage… it was about network usage, requests, sites, replication, and on and on. I really don’t see this as rocket science. I have a disk image. I want to put it in the cloud. I’m not going to be using it but once daily for incremental backups. Sounds like a common scenario. Yes, if “things get real” and my server goes down, I will need to bring down a lot of data and utilize a fair amount of vendor infrastructure. However, this may never happen. Offsite storage is an insurance policy.

    The complexity of the cost structures, perhaps by design, creates an environment where it’s incredibly hard to model bottom-line costs and compare vendor all-up pricing. As it is a “lazy Saturday morning,” I’m not in the mood for such antics and decided to shirk the endeavor entirely. Thus, I decided to simply fire up calc.exe and do a simple arithmetic model based on price per GB. I shuddered at the results. Certainly something was wrong… did I misplace a decimal point? Then I discovered CloudBerry’s own calculator. Nope, I hadn’t misplaced those decimals after all. Check it out (pricing based on 3174 GB):

        Amazon S3:  $398.00 per month   $4761.00 per year
        Azure:      $396.75 per month   $4761.00 per year
        Google:     $380.88 per month   $4570.56 per year

    Conclusion: rampant crack smoking at vendors. Seriously. Out. Of. Their. Minds.

    Now, to Amazon’s credit, vision, and outright common sense, they had one offering which directly addresses my scenario:

        Amazon Glacier:  $31.74 per month   $380.88 per year

    hmmm… It’s on the table. Let’s see what it would cost to just buy some drives and an enclosure and cart them over to a friend’s house.

        2 x 2TB drives from NewEgg.com:                     $199.99
        Enclosure:                                          $39.99
        Total:                                              $239.98
        Carting data back and forth to friend’s house:      pain
        Leaving the drive unplugged at friend’s:            $0 for electricity
        Possible data loss:                                 no way I can come and go every day

    I think I’ll think on this a bit more…
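    For what it’s worth, the price-per-GB arithmetic above is easy to reproduce in a few lines of code. The per-GB rates in the sketch below are rough assumptions chosen only because they approximately reproduce the monthly figures quoted, not actual or current vendor pricing:

        // Back-of-the-envelope model only: the price-per-GB figures are assumed
        // values for illustration, not real vendor pricing.
        using System;

        class OffsiteBackupCost
        {
            static void Main()
            {
                const double backupSizeGb = 3174;

                // (vendor, approximate $ per GB per month) -- assumed values
                var vendors = new (string Name, double PerGb)[]
                {
                    ("Amazon S3",      0.125),
                    ("Azure",          0.125),
                    ("Google",         0.120),
                    ("Amazon Glacier", 0.010),
                };

                foreach (var v in vendors)
                {
                    double monthly = backupSizeGb * v.PerGb;
                    Console.WriteLine($"{v.Name,-15} ${monthly,8:F2}/month  ${monthly * 12,9:F2}/year");
                }
            }
        }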

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error.

    This is an IIS error, not an ASP.NET error, so it doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of extensionless URLs. Note that it’s clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to handle the extensionless URL itself and failing, instead of handing it off to ASP.NET.

    As I mentioned, on my local machine this all worked fine, and to make sure the local and live setups matched I re-copied my web.config, double-checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However, no workey on the server with IIS 7.0!!!

    Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
          </modules>
        </system.webServer>

    And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However, on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work. Oddly, on IIS 7.5 it appears that you can’t even turn off the behavior – setting runAllManagedModulesForAllRequests="false" had no effect at all and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected.

    Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like that with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out.

    Moral of the story – never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out because I assumed that, since my web.config on the local machine was fine and working and I copied all relevant web.config data to the server, it couldn’t be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense settings in the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSs in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all.

    More info on extensionless URLs in IIS: want to find out more about exactly how extensionless URLs work on IIS 7? Check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic!

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in IIS7  Windows
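    For reference, the kind of extensionless route that trips this up is nothing exotic. A minimal, hypothetical registration looks something like the sketch below (the route and page names are made up); it is exactly these URLs that IIS 7.0 hands to the StaticFile handler unless the flag above is set:

        // Minimal sketch of an extensionless route that 404s on IIS 7.0
        // without runAllManagedModulesForAllRequests="true". Names are hypothetical.
        using System;
        using System.Web;
        using System.Web.Routing;

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Maps an extensionless URL like /products/42 to a Web Forms page.
                RouteTable.Routes.MapPageRoute(
                    "ProductDetail",            // route name
                    "products/{productId}",     // extensionless URL pattern
                    "~/ProductDetail.aspx");    // physical page that handles it
            }
        }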

    Read the article

  • New January 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I am super excited to announce the January 2013 release of the Ajax Control Toolkit! I have one word to describe this release and that word is “Charts” – we’ve added lots of great new chart controls to the Ajax Control Toolkit. You can download the new release directly from http://AjaxControlToolkit.CodePlex.com – or just fire the following command from the Visual Studio Library Package Manager Console window (NuGet):

        Install-Package AjaxControlToolkit

    You can also view the new chart controls by visiting the “live” Ajax Control Toolkit Sample Site.

    5 New Ajax Control Toolkit Chart Controls

    The Ajax Control Toolkit contains five new chart controls: the AreaChart, BarChart, BubbleChart, LineChart, and PieChart controls (the original post includes a sample screenshot of each). We realize that people love to customize the appearance of their charts, so all of the chart controls include properties such as color properties. The chart controls render the chart in the browser using SVG. They are compatible with any browser which supports SVG, including Internet Explorer 9 and newer and recent versions of Google Chrome, Mozilla Firefox, and Apple Safari. (If you attempt to display a chart in a browser which does not support SVG then you won’t get an error – you just won’t get anything.)

    Updates to the HTML Sanitizer

    If you are using the HtmlEditorExtender on a public-facing website then it is really important that you enable the HTML Sanitizer to prevent Cross-Site Scripting (XSS) attacks. The HtmlEditorExtender uses the HTML Sanitizer by default. The HTML Sanitizer strips out any suspicious content (like JavaScript code and CSS expressions) from the HTML submitted with the HtmlEditorExtender. We followed the recommendations of OWASP and ha.ckers.org to identify suspicious content. We updated the HTML Sanitizer with this release to protect against new types of XSS attacks, and it now has over 220 unit tests. The Ajax Control Toolkit team would like to thank Gil Cohen, who helped us identify and block additional XSS attacks.

    Change in Ajax Control Toolkit Version Format

    We ran out of numbers. The Ajax Control Toolkit was first released way back in 2006. In previous releases, the version of the Ajax Control Toolkit followed the format Release Year + Date. So, the previous release was 60919, where 6 represented the 6th release year and 0919 represented September 19. Unfortunately, the AssemblyVersion attribute uses a UInt16 data type which has a maximum value of 65,534. The number 70123 is bigger than 65,534, so we had to change our version format with this release. Fortunately, the AssemblyVersion attribute actually accepts four UInt16 numbers, so we used another one. This release of the Ajax Control Toolkit is officially version 7.0123. This new version format should work for another 65,000 years. And yes, I realize that 7.0123 is less than 60,919, but we ran out of numbers.

    Summary

    I hope that you find the chart controls included with this latest release of the Ajax Control Toolkit useful. Let me know if you use them in applications that you build. And let me know if you run into any issues using the new chart controls. Next month, back to improving the File Upload control – more exciting stuff.
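    As an aside, the 65,534 ceiling mentioned above is easy to see for yourself. The snippet below is only an illustration of that constraint, not code from the toolkit:

        // Illustration only: each component of an assembly version must fit in a
        // UInt16, and 65,534 is the practical ceiling for AssemblyVersion parts.
        //
        // [assembly: System.Reflection.AssemblyVersion("70123.0.0.0")] // rejected: 70123 > 65534
        // [assembly: System.Reflection.AssemblyVersion("7.123.0.0")]   // accepted: 7 and 123 both fit
        using System;

        class VersionCeiling
        {
            static void Main()
            {
                Console.WriteLine(ushort.MaxValue);        // 65535
                Console.WriteLine(new Version("7.0123"));  // prints 7.123
            }
        }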

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem: The SAN data volume has a throughput capacity of 400MB/sec; however, my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4MB/sec. Where is the problem and how can I find it?

    Solution: This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains, with working examples, how PREFETCH determines the performance of a Nested Loop join. First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL Server will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The original article shows the breakdown of the data in the RegionalOrders table; the Orders table contains about 6 million rows.

    In this first example, I am creating a stored procedure against the two tables and then executing it. Before running the stored procedure, I am going to include the actual execution plan.

        --Example provided by www.sqlworkshops.com
        --Create procedure that pulls orders based on City
        --Do not forget to include the actual execution plan
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
              FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
              WHERE City = @City
        END
        GO
        SET STATISTICS time ON
        GO
        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter SmallCity1
        EXEC RegionalOrdersProc 'SmallCity1'
        GO

    After running the stored procedure, if we right-click on the Clustered Index Scan and click Properties we can see the Estimated Number of Rows is 24. If we right-click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25.

    Now, if I run the same procedure with the parameter ‘BigCity’, will Prefetch be enabled?

        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter BigCity
        --We are using the cached plan
        EXEC RegionalOrdersProc 'BigCity'
        GO

    This time Prefetch is still not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’ that had Prefetch disabled. Please note that even though we have 999 rows for ‘BigCity’, the Estimated Number of Rows is still 24.

    Finally, let’s clear the procedure cache to trigger a new optimization and execute the procedure again.

        DBCC FREEPROCCACHE
        GO
        EXEC RegionalOrdersProc 'BigCity'
        GO

    This time our procedure runs in under a second, Prefetch is enabled and the Estimated Number of Rows is 999.

    The RegionalOrdersProc can be optimized by using the example below, where we are using an optimizer hint. I have also shown some other hints that could be used as well.

        --Example provided by www.sqlworkshops.com
        --You can fix the issue by using any of the following hints
        --Create procedure that pulls orders based on City
        DROP PROC RegionalOrdersProc
        GO
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
              FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
              WHERE City = @City
              --Hinting the optimizer to use SmallCity2 for estimation
              OPTION (OPTIMIZE FOR (@City = 'SmallCity2'))
              --Hinting the optimizer to estimate for the current parameters
              --OPTION (RECOMPILE)
              --Hinting the optimizer not to use the histogram but rather
              --density for estimation (average of all 3 cities)
              --OPTION (OPTIMIZE FOR (@City UNKNOWN))
              --OPTION (OPTIMIZE FOR UNKNOWN)
        END
        GO

    Conclusion: this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger it.

    Read the article
