Search Results

Search found 2272 results on 91 pages for 'fire dragon dol'.


  • CheckBox ListView SelectedValues DependencyProperty Binding

    - by Ristogod
    I am writing a custom control that is a ListView that has a CheckBox on each item in the ListView to indicate that item is Selected. I was able to do so with the following XAML. <ListView x:Class="CheckedListViewSample.CheckBoxListView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d"> <ListView.Style> <Style TargetType="{x:Type ListView}"> <Setter Property="SelectionMode" Value="Multiple" /> <Style.Resources> <Style TargetType="ListViewItem"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ListViewItem"> <Border BorderThickness="{TemplateBinding Border.BorderThickness}" Padding="{TemplateBinding Control.Padding}" BorderBrush="{TemplateBinding Border.BorderBrush}" Background="{TemplateBinding Panel.Background}" SnapsToDevicePixels="True"> <CheckBox IsChecked="{Binding IsSelected, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ListViewItem}}}"> <CheckBox.Content> <ContentPresenter Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}" HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" /> </CheckBox.Content> </CheckBox> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> </Style.Resources> </Style> </ListView.Style> </ListView> I however am trying to attempt one more feature. The ListView has a SelectedItems DependencyProperty that returns a collection of the Items that are checked. However, I need to implement a SelectedValues DependencyProperty. I also am implementing a SelectedValuesPath DependencyProperty. By using the SelectedValuesPath, I indicate the path where the values are found for each selected item. So if my items have an ID property, I can specify using the SelectedValuesPath property "ID". The SelectedValues property would then return a collection of ID values. 
I have this working also using this code in the code-behind: using System.Windows; using System.Windows.Controls; using System.ComponentModel; using System.Collections; using System.Collections.Generic; namespace CheckedListViewSample { /// <summary> /// Interaction logic for CheckBoxListView.xaml /// </summary> public partial class CheckBoxListView : ListView { public static DependencyProperty SelectedValuesPathProperty = DependencyProperty.Register("SelectedValuesPath", typeof(string), typeof(CheckBoxListView), new PropertyMetadata(string.Empty, null)); public static DependencyProperty SelectedValuesProperty = DependencyProperty.Register("SelectedValues", typeof(IList), typeof(CheckBoxListView), new PropertyMetadata(new List<object>(), null)); [Category("Appearance")] [Localizability(LocalizationCategory.NeverLocalize)] [Bindable(true)] public string SelectedValuesPath { get { return ((string)(base.GetValue(CheckBoxListView.SelectedValuesPathProperty))); } set { base.SetValue(CheckBoxListView.SelectedValuesPathProperty, value); } } [Bindable(true)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)] [Category("Appearance")] public IList SelectedValues { get { return ((IList)(base.GetValue(CheckBoxListView.SelectedValuesPathProperty))); } set { base.SetValue(CheckBoxListView.SelectedValuesPathProperty, value); } } public CheckBoxListView() : base() { InitializeComponent(); base.SelectionChanged += new SelectionChangedEventHandler(CheckBoxListView_SelectionChanged); } private void CheckBoxListView_SelectionChanged(object sender, SelectionChangedEventArgs e) { List<object> values = new List<object>(); foreach (var item in SelectedItems) { if (string.IsNullOrWhiteSpace(SelectedValuesPath)) { values.Add(item); } else { try { values.Add(item.GetType().GetProperty(SelectedValuesPath).GetValue(item, null)); } catch { } } } base.SetValue(CheckBoxListView.SelectedValuesProperty, values); e.Handled = true; } } } My problem is that my binding only works one way right now. I'm having trouble trying to figure out how to implement my SelectedValues DependencyProperty so that I could Bind a Collection of values to it, and when the control is loaded, the CheckBoxes are checked with items that have values that correspond to the SelectedValues. I've considered using the PropertyChangedCallBack event, but can't quite figure out how I could write that to achieve my goal. I'm also unsure of how I find the correct ListViewItem to set it as Selected. And lastly, if I can find the ListViewItem and set it to be Selected, won't that fire my SelectionChanged event each time I set a ListViewItem to be Selected?
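    A minimal sketch of the PropertyChangedCallback approach mentioned above, assuming the same reflection lookup via SelectedValuesPath; the _syncingSelection flag is an addition (not part of the original control), and the existing CheckBoxListView_SelectionChanged handler would also need an early return when it is true so programmatic selection does not re-enter it:

        // hedged sketch: register SelectedValues with a change callback
        public static readonly DependencyProperty SelectedValuesProperty =
            DependencyProperty.Register("SelectedValues", typeof(IList), typeof(CheckBoxListView),
                new PropertyMetadata(null, OnSelectedValuesChanged));

        private bool _syncingSelection;   // checked (and skipped) in CheckBoxListView_SelectionChanged

        private static void OnSelectedValuesChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            ((CheckBoxListView)d).SyncSelectionFromValues(e.NewValue as IList);
        }

        private void SyncSelectionFromValues(IList values)
        {
            if (values == null || _syncingSelection) return;
            _syncingSelection = true;
            try
            {
                SelectedItems.Clear();
                foreach (var item in Items)
                {
                    object value = item;
                    if (!string.IsNullOrWhiteSpace(SelectedValuesPath))
                    {
                        var prop = item.GetType().GetProperty(SelectedValuesPath);
                        value = prop == null ? null : prop.GetValue(item, null);
                    }
                    if (value != null && values.Contains(value))
                        SelectedItems.Add(item);   // checks the box via the IsSelected template binding
                }
            }
            finally { _syncingSelection = false; }
        }

    With something like this in place, the SelectedValues getter and setter would also need to read and write SelectedValuesProperty rather than SelectedValuesPathProperty (the current accessors look like a copy-paste slip).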


  • IRQ problem with 2.6.32/2.6.39 kernel on Debian Squeeze x86_64

    - by MasterM
    I recently assembled a new computer so that all hardware is pretty new. Since then I've been experiencing some problem with IRQs when running Debian 6.0. On random occasions, usually after an hour or so of running I hear a beep and this shows up in dmesg: [ 3537.762795] irq 16: nobody cared (try booting with the "irqpoll" option) [ 3537.762797] Pid: 0, comm: swapper Tainted: P W O 2.6.39-2-amd64 #1 [ 3537.762798] Call Trace: [ 3537.762799] <IRQ> [<ffffffff810924d4>] ? __report_bad_irq+0x3a/0xa2 [ 3537.762803] [<ffffffff810926a4>] ? note_interrupt+0x168/0x1da [ 3537.762805] [<ffffffff81090dd4>] ? handle_irq_event_percpu+0x171/0x18f [ 3537.762807] [<ffffffff8100e0e2>] ? read_tsc+0x5/0x16 [ 3537.762809] [<ffffffff8106b8a2>] ? update_ts_time_stats+0x32/0x6b [ 3537.762810] [<ffffffff81090e26>] ? handle_irq_event+0x34/0x52 [ 3537.762812] [<ffffffff81063fb7>] ? sched_clock_idle_wakeup_event+0x12/0x1c [ 3537.762813] [<ffffffff81092df2>] ? handle_fasteoi_irq+0x82/0xa4 [ 3537.762815] [<ffffffff8100aadb>] ? handle_irq+0x1a/0x23 [ 3537.762816] [<ffffffff8100a384>] ? do_IRQ+0x45/0xaa [ 3537.762818] [<ffffffff81332c93>] ? common_interrupt+0x13/0x13 [ 3537.762818] <EOI> [<ffffffff81332c8e>] ? common_interrupt+0xe/0x13 [ 3537.762821] [<ffffffff81026800>] ? native_safe_halt+0x2/0x3 [ 3537.762829] [<ffffffffa016ed58>] ? acpi_idle_do_entry+0x39/0x62 [processor] [ 3537.762831] [<ffffffffa016edde>] ? acpi_idle_enter_c1+0x5d/0xad [processor] [ 3537.762834] [<ffffffff81261033>] ? cpuidle_idle_call+0x11f/0x1cc [ 3537.762835] [<ffffffff81008dd2>] ? cpu_idle+0xab/0xe1 [ 3537.762837] [<ffffffff8169fc60>] ? start_kernel+0x3e0/0x3eb [ 3537.762838] [<ffffffff8169f3c8>] ? x86_64_start_kernel+0x102/0x10f [ 3537.762839] handlers: [ 3537.762840] [<ffffffffa0358d5a>] (rtl8169_interrupt+0x0/0x2d7 [r8169]) [ 3537.762842] [<ffffffffa08ff2ca>] (nv_kern_isr+0x0/0x54 [nvidia]) [ 3537.762902] Disabling IRQ #16 After that Xorg either hogs on CPU or is unstable (up to hanging the system completely). When I restart Xorg everything is fine again and the problem doesn't occur until next reboot. I tried to upgrade the kernel from stock 2.6.32 to 2.6.39 from unstable repository but that didn't help. Booting with irqpoll option only seems to prolong the initial time period after which the problem occurs. I'm using latest NVIDIA drivers and Realtek firmware from firmware-realtek package. I have two GTX 560Ti that run in SLI. Disabling SLI or taking out one card completely doesn't solve the problem either. 
Output of uname -a is: Linux whitestar 2.6.39-2-amd64 #1 SMP Wed Jun 8 11:01:04 UTC 2011 x86_64 GNU/Linux Output of lspci is: 00:00.0 Host bridge: Intel Corporation Sandy Bridge DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:01.1 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04) 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 05) 00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 3 (rev b5) 00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5) 00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5) 00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation Cougar Point LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05) 01:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1) 01:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1) 02:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1) 02:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1) 04:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) 06:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) 07:00.0 PCI bridge: Device 1b21:1080 (rev 01) 08:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10) 08:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. 
VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) Contents of /proc/interrupts: CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 77 0 0 0 0 0 0 0 IO-APIC-edge timer 1: 2 0 0 0 0 0 0 0 IO-APIC-edge i8042 8: 1 0 0 0 0 0 0 0 IO-APIC-edge rtc0 9: 0 0 0 0 0 0 0 0 IO-APIC-fasteoi acpi 12: 4 0 0 0 0 0 0 0 IO-APIC-edge i8042 16: 699083 0 0 0 0 0 0 0 IO-APIC-fasteoi nvidia, eth0 17: 87810 0 0 0 0 0 0 0 IO-APIC-fasteoi firewire_ohci, hda_intel, nvidia 18: 242 0 0 0 0 0 0 0 IO-APIC-fasteoi hda_intel 23: 85925 0 0 0 0 0 0 0 IO-APIC-fasteoi ehci_hcd:usb5, ehci_hcd:usb6 40: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 41: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 42: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 43: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 44: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 45: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 46: 79853 0 0 0 0 0 0 0 PCI-MSI-edge ahci 48: 1 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 49: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 50: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 51: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 52: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 53: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 54: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 55: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 56: 1 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 57: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 58: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 59: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 60: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 61: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 62: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 63: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 64: 173506 0 0 0 0 0 0 0 PCI-MSI-edge hda_intel NMI: 482 89 25 13 277 24 11 10 Non-maskable interrupts LOC: 783857 194752 114133 70577 372438 179065 117179 162016 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 482 89 25 13 277 24 11 10 Performance monitoring interrupts IWI: 0 0 0 0 0 0 0 0 IRQ work interrupts RES: 131917 46750 7432 3291 150003 9576 3435 3067 Rescheduling interrupts CAL: 2759 6563 7150 6997 5387 7140 7269 6678 Function call interrupts TLB: 4396 2038 1336 492 5434 1896 1121 606 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 37 37 37 37 37 37 37 37 Machine check polls ERR: 0 MIS: 0 Last but not least, right after boot-up those lines are usually present in dmesg: [ 18.367094] hda-intel: IRQ timing workaround is activated for card #1. Suggest a bigger bdl_pos_adj. [ 18.458859] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj. I'm not sure if it's related or a symptom of a bigger problem so I'm posting it just in case. I don't really know what other information might be of relevance here. Don't hesitate to ask for more in the comments.
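    For reference, a sketch of how the irqpoll experiment could be made persistent on Squeeze, assuming the default GRUB 2 setup; this only papers over the symptom and does not resolve which of the two handlers sharing IRQ 16 (r8169 or nvidia) is misbehaving:

        # /etc/default/grub (assumed stock Debian Squeeze layout)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet irqpoll"

        # regenerate the boot configuration, then reboot
        sudo update-grub

        # after boot, confirm which drivers still share the line
        grep -E '(CPU0|16:)' /proc/interrupts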


  • How do I drag and drop between two listboxes in XUL?

    - by pc1oad1etter
    I am trying to implement drag and drop between two listboxes. I have a few problems 1) I am not detecting any drag events of any kind from the source list box/ I do not seem to be able to drag from it 2) I can drag from my desktop to the target listbox and I am able to detect 'dragenter' 'dragover' and 'dragexit' events. I am noticing that the event parameter is undefined in my 'dragenter' callback - is this a problem? 3) I cannot figure out how to complete the drag and drop operation. From https://developer.mozilla.org/En/DragDrop/Drag_Operations#Performing_... "If the mouse was released over an element that is a valid drop target, that is, one that cancelled the last dragenter or dragover event, then the drop will be successful, and a drop event will fire at the target. Otherwise, the drag operation is cancelled and no drop event is fired." This seems to be referring to a 'drop' event, though there is not one listed at https://developer.mozilla.org/en/XUL/Events . I can't seem to detect the end of the drag in order to call one of the example 'doDrop()' functions that I find on MDC. My example, so far: http://pastebin.mozilla.org/713676 <?xml version="1.0"?> <?xml-stylesheet href="chrome://global/skin/" type="text/css"?> <window xmlns="http://www.mozilla.org/keymaster/gatekeeper/ there.is.only.xul" onload="initialize();"> <vbox> <hbox> <vbox> <description>List1</description> <listbox id="source" draggable="true"> <listitem label="1"/> <listitem label="3"/> <listitem label="4"/> <listitem label="5"/> </listbox> </vbox> <vbox> <description>List2</description> <listbox id="target" ondragenter="onDragEnter();"> <listitem label="2"/> </listbox> </vbox> </hbox> </vbox> <script type="application/x-javascript"> <![CDATA[ function initialize(){ jsdump('adding events'); var origin = document.getElementById("source"); origin.addEventListener("drag", onDrag, false); origin.addEventListener("dragdrop", onDragDrop, false); origin.addEventListener("dragend", onDragEnd, false); origin.addEventListener("dragstart", onDragStart, false); var target = document.getElementById("target"); target.addEventListener("dragenter", onDragEnter, false); target.addEventListener("dragover", onDragOver, false); target.addEventListener("dragexit", onDragExit, false); target.addEventListener("drop", onDrop, false); target.addEventListener("drag", onDrag, false); target.addEventListener("dragdrop", onDragDrop, false); } function onDrag(){ jsdump('onDrag'); } function onDragDrop(){ jsdump('onDragDrop'); } function onDragStart(){ jsdump('onDragStart'); } function onDragEnd(){ jsdump('onDragEnd'); } function onDragEnter(event){ //debugger; if(event){ jsdump('onDragEnter event.preventDefault()'); event.preventDefault(); }else{ jsdump("event undefined in onDragEnter"); } } function onDragExit(){ jsdump('onDragExit'); } function onDragOver(event){ //debugger; if(event){ //jsdump('onDragOver event.preventDefault()'); event.preventDefault(); }else{ jsdump("event undefined in onDragOver"); } } function onDrop(event){ jsdump('onDrop'); var data = event.dataTransfer.getData("text/plain"); event.target.textContent = data; event.preventDefault(); } function jsdump(str) { Components.classes['[email protected]/consoleservice;1'] .getService(Components.interfaces.nsIConsoleService) .logStringMessage(str); } ]]> </script> </window>
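    One thing that may be worth trying (a hedged sketch, not a verified fix for this exact markup): with the HTML5-style drag events used here, the drag usually only proceeds if the dragstart handler puts data on event.dataTransfer, and drop only fires on a target whose dragover was cancelled, so the handlers could look roughly like this (selectedItem and appendItem are the XUL listbox APIs):

        function onDragStart(event) {
          // without setData the drag is typically abandoned, which would explain
          // why no drag events are seen on the source listbox
          var item = document.getElementById("source").selectedItem;
          if (item) {
            event.dataTransfer.setData("text/plain", item.getAttribute("label"));
            event.dataTransfer.effectAllowed = "move";
          }
        }
        function onDragOver(event) {
          event.preventDefault();   // mark the target listbox as a valid drop target
        }
        function onDrop(event) {
          var label = event.dataTransfer.getData("text/plain");
          document.getElementById("target").appendItem(label, label);
          event.preventDefault();
        }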


  • asp.net - csharp - jquery - looking for a better usage solution

    - by LostLord
    Hi, my dear friends. I have a small problem with jQuery (I really do not know jQuery, but I am forced to use it). I am using VS 2008 with an ASP.NET web app in C#, Telerik controls on my pages, and SqlDataSources (connected to stored procedures). My pages are based on master and content pages, and the content pages contain MultiViews. In one of the views (inside one of those MultiViews) I made two RadComboBoxes for country and city, set up as cascading parent and child combo boxes. I did it the old way: an UpdatePanel, with this code in the SelectedIndexChanged event of the parent RadComboBox (country): protected void RadcomboboxCountry_SelectedIndexChanged(object o, RadComboBoxSelectedIndexChangedEventArgs e) { hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue; RadcomboboxCity.Items.Clear(); RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5")); RadcomboboxCity.DataBind(); RadcomboboxCity.SelectedIndex = 0; } This code fills the child RadComboBox. Here is how: the child SqlDataSource has a stored procedure with a parameter, and I fill that parameter with this line - hfSelectedCo_ID.Value = RadcbCoNameInInsert.SelectedValue; RadcbCoNameInInsert.SelectedValue is the country ID. The SelectedIndexChanged event of the parent RadComboBox (country) would not fire on its own, so I was forced to set the AutoPostBack property to true. After that everything was fine, until someone asked whether I could control focus and key handling of the RadComboBoxes (press Enter on the parent combo box [country] and the child combo box gets focus; press the up arrow on the child RadComboBox [city] and the parent combo box [country] gets focus), for users who do not want to use the mouse to enter data and choose items. I told him this is a web app, not WinForms, and we cannot do that. I googled it and found that jQuery was the only way to do it, so I started using jQuery and wrote this code for both of them: <script src="../JQuery/jquery-1.4.1.js" language="javascript" type="text/javascript"></script> <script type="text/javascript"> $(function() { $('input[id$=RadcomboboxCountry_Input]').focus(); $('input[id$=RadcomboboxCountry_Input]').select(); $('input[id$=RadcomboboxCountry_Input]').bind('keyup', function(e) { var code = (e.keyCode ? e.keyCode : e.which); if (code == 13) { -----------> Enter Key $('input[id$=RadcomboboxCity_Input]').focus(); $('input[id$=RadcomboboxCity_Input]').select(); } }); $('input[id$=RadcomboboxCity_Input]').bind('keyup', function(e) { var code = (e.keyCode ? e.keyCode : e.which); if (code == 38) { -----------> Upper Key $('input[id$=RadcomboboxCountry_Input]').focus(); $('input[id$=RadcomboboxCountry_Input]').select(); } }); }); </script> The jQuery code worked, BUT AutoPostBack=true on the parent RadComboBox became a problem: when the SelectedIndexChanged event of the parent RadComboBox fires, the Telerik skins run afterwards, the parent combo box loses focus, and the user has to reach for the mouse, which we do not want.
    To fix this, I decided to set AutoPostBack on the parent combo box to false and convert protected void RadcomboboxCountry_SelectedIndexChanged(object o, RadComboBoxSelectedIndexChangedEventArgs e) { hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue; RadcomboboxCity.Items.Clear(); RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5")); RadcomboboxCity.DataBind(); RadcomboboxCity.SelectedIndex = 0; } into a public non-static method without parameters and call it from jQuery (I used the onclientchanged property of the parent combo box, like onclientchanged = "MyMethodForParentCB_InJquery();", instead of the SelectedIndexChanged event): public void MyMethodForParentCB_InCodeBehind() { hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue; RadcomboboxCity.Items.Clear(); RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5")); RadcomboboxCity.DataBind(); RadcomboboxCity.SelectedIndex = 0; } To do that I read the manual below and followed it step by step: http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=732 But that manual is about static methods, and that is my new problem. When I use a static method like public static void MyMethodForParentCB_InCodeBehind() { hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue; RadcomboboxCity.Items.Clear(); RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5")); RadcomboboxCity.DataBind(); RadcomboboxCity.SelectedIndex = 0; } I receive errors because the method cannot see my controls and hidden field. One of those errors looks like this: Error 2 An object reference is required for the non-static field, method, or property 'Darman.SuperAdmin.Users.hfSelectedCo_ID' C:\Javad\Copy of Darman 6\Darman\SuperAdmin\Users.aspx.cs 231 13 Darman Any ideas? Is there a way to call non-static methods from jQuery (I know we cannot do that directly, but is there another way to solve my problem)?
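    One pattern sometimes used in WebForms for this situation is to keep the logic in an ordinary non-static handler and have jQuery trigger a partial postback instead of calling a page method. The sketch below assumes a hidden button (btnCascade is a hypothetical ID) inside the UpdatePanel whose server-side Click handler calls MyMethodForParentCB_InCodeBehind(); it illustrates the idea rather than being a drop-in fix for the Telerik markup above.

        // hedged sketch: fire a partial postback from the keyup handler so a
        // normal (non-static) server-side method runs with access to the controls
        $('input[id$=RadcomboboxCountry_Input]').bind('keyup', function (e) {
            var code = (e.keyCode ? e.keyCode : e.which);
            if (code == 13) {
                // 'btnCascade' is an assumed hidden Button inside the UpdatePanel;
                // its Click handler would call MyMethodForParentCB_InCodeBehind()
                __doPostBack('<%= btnCascade.UniqueID %>', '');
            }
        });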


  • Jquery Live Function

    - by marharépa
    Hi! I want to make this script to work as LIVE() function. Please help me! $(".img img").each(function() { $(this).cjObjectScaler({ destElem: $(this).parent(), method: "fit" }); }); the cjObjectScaler script (called in the html header) is this: (thanks for Doug Jones) (function ($) { jQuery.fn.imagesLoaded = function (callback) { var elems = this.filter('img'), len = elems.length; elems.bind('load', function () { if (--len <= 0) { callback.call(elems, this); } }).each(function () { // cached images don't fire load sometimes, so we reset src. if (this.complete || this.complete === undefined) { var src = this.src; // webkit hack from http://groups.google.com/group/jquery-dev/browse_thread/thread/eee6ab7b2da50e1f this.src = '#'; this.src = src; } }); }; })(jQuery); /* CJ Object Scaler */ (function ($) { jQuery.fn.cjObjectScaler = function (options) { /* user variables (settings) ***************************************/ var settings = { // must be a jQuery object method: "fill", // the parent object to scale our object into destElem: null, // fit|fill fade: 0 // if positive value, do hide/fadeIn }; /* system variables ***************************************/ var sys = { // function parameters version: '2.1.1', elem: null }; /* scale the image ***************************************/ function scaleObj(obj) { // declare some local variables var destW = jQuery(settings.destElem).width(), destH = jQuery(settings.destElem).height(), ratioX, ratioY, scale, newWidth, newHeight, borderW = parseInt(jQuery(obj).css("borderLeftWidth"), 10) + parseInt(jQuery(obj).css("borderRightWidth"), 10), borderH = parseInt(jQuery(obj).css("borderTopWidth"), 10) + parseInt(jQuery(obj).css("borderBottomWidth"), 10), objW = jQuery(obj).width(), objH = jQuery(obj).height(); // check for valid border values. IE takes in account border size when calculating width/height so just set to 0 borderW = isNaN(borderW) ? 0 : borderW; borderH = isNaN(borderH) ? 0 : borderH; // calculate scale ratios ratioX = destW / jQuery(obj).width(); ratioY = destH / jQuery(obj).height(); // Determine which algorithm to use if (!jQuery(obj).hasClass("cf_image_scaler_fill") && (jQuery(obj).hasClass("cf_image_scaler_fit") || settings.method === "fit")) { scale = ratioX < ratioY ? ratioX : ratioY; } else if (!jQuery(obj).hasClass("cf_image_scaler_fit") && (jQuery(obj).hasClass("cf_image_scaler_fill") || settings.method === "fill")) { scale = ratioX < ratioY ? ratioX : ratioY; } // calculate our new image dimensions newWidth = parseInt(jQuery(obj).width() * scale, 10) - borderW; newHeight = parseInt(jQuery(obj).height() * scale, 10) - borderH; // Set new dimensions & offset jQuery(obj).css({ "width": newWidth + "px", "height": newHeight + "px"//, // "position": "absolute", // "top": (parseInt((destH - newHeight) / 2, 10) - parseInt(borderH / 2, 10)) + "px", // "left": (parseInt((destW - newWidth) / 2, 10) - parseInt(borderW / 2, 10)) + "px" }).attr({ "width": newWidth, "height": newHeight }); // do our fancy fade in, if user supplied a fade amount if (settings.fade > 0) { jQuery(obj).fadeIn(settings.fade); } } /* set up any user passed variables ***************************************/ if (options) { jQuery.extend(settings, options); } /* main ***************************************/ return this.each(function () { sys.elem = this; // if they don't provide a destObject, use parent if (settings.destElem === null) { settings.destElem = jQuery(sys.elem).parent(); } // need to make sure the user set the parent's position. 
Things go bonker's if not set. // valid values: absolute|relative|fixed if (jQuery(settings.destElem).css("position") === "static") { jQuery(settings.destElem).css({ "position": "relative" }); } // if our object to scale is an image, we need to make sure it's loaded before we continue. if (typeof sys.elem === "object" && typeof settings.destElem === "object" && typeof settings.method === "string") { // if the user supplied a fade amount, hide our image if (settings.fade > 0) { jQuery(sys.elem).hide(); } if (sys.elem.nodeName === "IMG") { // to fix the weird width/height caching issue we set the image dimensions to be auto; jQuery(sys.elem).width("auto"); jQuery(sys.elem).height("auto"); // wait until the image is loaded before scaling jQuery(sys.elem).imagesLoaded(function () { scaleObj(this); }); } else { scaleObj(jQuery(sys.elem)); } } else { console.debug("CJ Object Scaler could not initialize."); return; } }); }; })(jQuery);
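    As far as I know, live() only delegates events; it cannot replay an arbitrary .each() plugin call on elements added later. A common workaround (a sketch, assuming the new images arrive via an AJAX insert you control; #gallery and more-images.html are placeholder names) is to wrap the call in a function and re-run it after each insert:

        function scaleImages(context) {
            // only touch images that have not been scaled yet
            $(".img img", context).not(".scaled").addClass("scaled").each(function () {
                $(this).cjObjectScaler({
                    destElem: $(this).parent(),
                    method: "fit"
                });
            });
        }

        $(document).ready(function () {
            scaleImages(document);              // initial page load
        });

        // hypothetical AJAX insert: re-run the scaler on the new fragment
        $("#gallery").load("more-images.html", function () {
            scaleImages(this);
        });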


  • JavaFX FXML communication between Application and Controller classes

    - by likethesky
    I am trying to get and destroy an external process I've created via ProcessBuilder in my FXML application close, but it's not working. This is based on the helpful advice Sergey Grinev gave me here. I have tried running with/without the "// myController.setApp(this);" and with "// super.stop();" at top of subclass and at bottom (see commented out/in for that line in MyApp), but no combination works. This probably isn't related to FXML or JavaFX, though I imagine this is a common pattern for developing apps on JavaFX. I suppose I'm asking for a Java best practice for closing dependent processes in a UI-based app like this one (in this case: FXML / JavaFX based), where there is a controller class and an application class. Can you explain what I'm doing wrong? Or better: advise what I should be doing instead? Thanks. In my Application I do this: public class MyApp extends Application { @Override public void start(Stage primaryStage) throws Exception { FXMLLoader fxmlLoader = new FXMLLoader(); Scene scene = (Scene)FXMLLoader.load(getClass().getResource("MyApp.fxml")); MyAppController myController = (MyAppController)fxmlLoader.getController(); primaryStage.setScene(scene); primaryStage.show(); // myController.setApp(this); } @Override public void stop() throws Exception { // super.stop(); // this is called on fx app close, you may call it in an action handler too if (MyAppController.getScriptProcess() != null) { MyAppController.getScriptProcess().destroy(); } super.stop(); } public static void main(String[] args) { launch(args); } } In my Controller I do this: public class MyAppController implements Initializable { private Application app; private static Process scriptProcess; public void setApp(Application a) { app = a; } public static Process getScriptProcess() { return scriptProcess; } } The result when I run with the "commented-out setApp()" not commented out (that is, left in the start method), is the following, immediately upon launch (the main Scene flashes, then disappears, then this dialog appears: "JavaFX Launcher Error: Exception while running Application" And it gives an, "Exception in Application start method" in the console as well. 
The result when I leave out the "commented-out code" in my MyApp above (that is, remove the "setApp()" from the start method), is that my app does indeed close, but gives this error when it closes: Exception in thread "JavaFX Application Thread" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1440) at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:69) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:217) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:170) at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:38) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:37) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:53) at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:28) at javafx.event.Event.fireEvent(Event.java:171) at javafx.scene.Node.fireEvent(Node.java:6863) at javafx.scene.control.Button.fire(Button.java:179) at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:193) at com.sun.javafx.scene.control.skin.SkinBase$4.handle(SkinBase.java:336) at com.sun.javafx.scene.control.skin.SkinBase$4.handle(SkinBase.java:329) at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:64) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:217) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:170) at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:38) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:37) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:53) at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:33) at javafx.event.Event.fireEvent(Event.java:171) at javafx.scene.Scene$MouseHandler.process(Scene.java:3324) at javafx.scene.Scene$MouseHandler.process(Scene.java:3164) at javafx.scene.Scene$MouseHandler.access$1900(Scene.java:3119) at javafx.scene.Scene.impl_processMouseEvent(Scene.java:1559) at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2261) at 
com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:228) at com.sun.glass.ui.View.handleMouseEvent(View.java:528) at com.sun.glass.ui.View.notifyMouse(View.java:922) at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method) at com.sun.glass.ui.gtk.GtkApplication$3$1.run(GtkApplication.java:82) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1435) ... 44 more Caused by: java.lang.NullPointerException at mypackage.MyController.handleCancel(MyController.java:300) ... 49 more Clean up...
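    For what it is worth, a sketch of one likely suspect in the start() method above (offered as a guess, not a confirmed diagnosis): the scene is loaded through the static FXMLLoader.load(...) overload, so the fxmlLoader instance never actually loads the document and getController() returns null, which would explain both the launcher error when setApp(this) is enabled and the NullPointerException from the controller. Loading through the instance keeps the loader and controller paired:

        // hedged sketch of an alternative start() body
        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("MyApp.fxml"));
        Scene scene = (Scene) fxmlLoader.load();                      // instance load, not the static overload
        MyAppController myController = fxmlLoader.getController();    // non-null now
        myController.setApp(this);
        primaryStage.setScene(scene);
        primaryStage.show();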


  • Moving the swapfiles to a dedicated partition in Snow Leopard

    - by e.James
    I have been able to move Apple's virtual memory swapfiles to a dedicated partition on my hard drive up until now. The technique I have been using is described in a thread on forums.macosxhints.com. However, with the developer preview of Snow Leopard, this method no longer works. Does anyone know how it could be done with the new OS? Update: I have marked dblu's answer as accepted even though it didn't quite work because he gave excellent, detailed instructions and because his suggestion to use plutil ultimately pointed me in the right direction. The complete, working solution is posted here in the question because I don't have enough reputation to edit the accepted answer. Complete solution: 1. Open Terminal and make a backup copy of Apple's default dynamic_pager.plist: $ cd /System/Library/LaunchDaemons $ sudo cp com.apple.dynamic_pager.plist{,_bak} 2. Convert the plist from binary to plain XML: $ sudo plutil -convert xml1 com.apple.dynamic_pager.plist 3. Open the converted plist with your text editor of choice. (I use pico, see dblu's answer for an example using vim): $ sudo pico -w com.apple.dynamic_pager.plist It should look as follows: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs$ <plist version="1.0"> <dict> <key>EnableTransactions</key> <true/> <key>HopefullyExitsLast</key> <true/> <key>Label</key> <string>com.apple.dynamic_pager</string> <key>OnDemand</key> <false/> <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager</string> <string>-F</string> <string>/private/var/vm/swapfile</string> </array> </dict> </plist> 4. Change the ProgramArguments array (lines 13 through 18) so that it launches an intermediate shell script instead of launching dynamic_pager directly. See note #1 for details on why this is necessary. <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager_init</string> </array> 5. Save the plist, and return to the terminal prompt. Using pico, the commands would be: <ctrl+o> to save the file <enter> to accept the same filename (com.apple.dynamic_pager.plist) <ctrl+x> to exit 6. Convert the modified plist back to binary: $ sudo plutil -convert binary1 com.apple.dynamic_pager.plist 7. Create the intermediate shell script: $ cd /sbin $ sudo pico -w dynamic_pager_init The script should look as follows (my partition is called 'Swap', and I chose to put the swapfiles in a hidden directory on that partition, called '.vm' be sure that the directory you specify actually exists): Update: This version of the script makes use of wait4path as suggested by ZILjr: #!/bin/bash #launch Apple's dynamic_pager only when the swap volume is mounted echo "Waiting for Swap volume to mount"; wait4path /Volumes/Swap; echo "Launching dynamic pager on volume Swap"; /sbin/dynamic_pager -F /Volumes/Swap/.vm/swapfile; 8. Save and close dynamic_pager_init (same commands as step 5) 9. Modify permissions and ownership for dynamic_pager_init: $ sudo chmod a+x-w /sbin/dynamic_pager_init $ sudo chown root:wheel /sbin/dynamic_pager_init 10. Verify the permissions on dynamic_pager_init: $ ls -l dynamic_pager_init -r-xr-xr-x 1 root wheel 6 18 Sep 15:11 dynamic_pager_init 11. Restart your Mac. If you run into trouble, switch to verbose startup mode by holding down Command-v immediately after the startup chime. This will let you see all of the startup messages that appear during startup. If you run into even worse trouble (i.e. you never see the login screen), hold down Command-s instead. 
This will boot the computer in single-user mode (no graphical UI, just a command prompt) and allow you to restore the backup copy of com.apple.dynamic_pager.plist that you made in step 1. 12. Once the computer boots, fire up Terminal and verify that the swap files have actually been moved: $ cd /Volumes/Swap/.vm $ ls -l You should see something like this: -rw------- 1 someUser staff 67108864 18 Sep 12:02 swapfile0 13. Delete the old swapfiles: $ cd /private/var/vm $ sudo rm swapfile* 14. Profit! Note 1 Simply modifying the arguments to dynamic_pager in the plist does not always work, and when it fails, it does so in a spectacularly silent way. The problem stems from the fact that dynamic_pager is launched very early in the startup process. If your swap partition has not yet been mounted when dynamic_pager is first loaded (in my experience, this happens 99% of the time), then the system will fake its way through. It will create a symbolic link in your /Volumes directory which has the same name as your swap partition, but points back to the default swapfile location (/private/var/vm). Then, when your actual swap partition mounts, it will be given the name Swap 1 (or YourDriveName 1). You can see the problem by opening up Terminal and listing the contents of your /Volumes directory: $ cd /Volumes $ ls -l You will see something like this: drwxrwxrwx 11 yourUser staff 442 16 Sep 12:13 Swap -> private/var/vm drwxrwxrwx 14 yourUser staff 5 16 Sep 12:13 Swap 1 lrwxr-xr-x 1 root admin 1 17 Sep 12:01 System -> / Note that this failure can be very hard to spot. If you were to check for the swapfiles as I show in step 12, you would still see them! The symbolic link would make it seem as though your swapfiles had been moved, even though they were actually being stored in the default location. Note 2 I was originally unable to get this to work in Snow Leopard because com.apple.dynamic_pager.plist was stored in binary format. I made a copy of the original file and opened it with Apple's Property List Editor (available with Xcode) in order to make changes, but this process added some extended attributes to the plist file which caused the system to ignore it and just use the defaults. As dblu pointed out, using plutil to convert the file to plain XML works like a charm. Note 3 You can check the Console application to see any messages that dynamic_pager_init echos to the screen. If you see the following lines repeated over and over again, there is a problem with the setup. I ran into these messages because I forgot to create the '.vm' directory that I specified in dynamic_pager_init. com.apple.launchd[1] (com.apple.dynamic_pager[176]) Exited with exit code: 1 com.apple.launchd[1] (com.apple.dynamic_pager) Throttling respawn: Will start in 10 seconds When everything is working properly, you may see the above message a couple of times, but you should also see the following message, and then no more of the "Throttling respawn" messages afterwards. com.apple.dynamic_pager[???] Launching dynamic pager on volume Swap This means that the script did have to wait for the partition to load, but in the end it was successful.


  • MVC 3 Remote Validation jQuery error on submit

    - by Richard Reddy
    I seem to have a weird issue with remote validation on my project. I am doing a simple validation check on an email field to ensure that it is unique. I've noticed that unless I put the cursor into the textbox and then remove it to trigger the validation at least once before submitting my form I will get a javascript error. e[h] is not a function jquery.min.js line 3 If I try to resubmit the form after the above error is returned everything works as expected. It's almost like the form tried to submit before waiting for the validation to return or something. Am I required to silently fire off a remote validation request on submit before submitting my form? Below is a snapshot of the code I'm using: (I've also tried GET instead of POST but I get the same result). As mentioned above, the code works fine but the form returns a jquery error unless the validation is triggered at least once. Model: public class RegisterModel { [Required] [Remote("DoesUserNameExist", "Account", HttpMethod = "POST", ErrorMessage = "User name taken.")] [Display(Name = "User name")] public string UserName { get; set; } [Required] [Display(Name = "Firstname")] public string Firstname { get; set; } [Display(Name = "Surname")] public string Surname { get; set; } [Required] [Remote("DoesEmailExist", "Account", HttpMethod = "POST", ErrorMessage = "Email taken.", AdditionalFields = "UserName")] [Display(Name = "Email address")] public string Email { get; set; } [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 8)] [DataType(DataType.Password)] [Display(Name = "Password")] public string Password { get; set; } [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 8)] [DataType(DataType.Password)] [Display(Name = "Confirm password")] public string ConfirmPassword { get; set; } [Display(Name = "Approved?")] public bool IsApproved { get; set; } } public class UserRoleModel { [Display(Name = "Assign Roles")] public IEnumerable<RoleViewModel> AllRoles { get; set; } public RegisterModel RegisterUser { get; set; } } Controller: // POST: /Account/DoesEmailExist // passing in username so that I can ignore the same email address for the same user on edit page [HttpPost] public JsonResult DoesEmailExist([Bind(Prefix = "RegisterUser.Email")]string Email, [Bind(Prefix = "RegisterUser.UserName")]string UserName) { var user = Membership.GetUserNameByEmail(Email); if (!String.IsNullOrEmpty(UserName)) { if (user == UserName) return Json(true); } return Json(user == null); } View: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script> <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.17/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript" src="/Content/web/js/jquery.unobtrusive-ajax.min.js"></script> <script type="text/javascript" src="/Content/web/js/jquery.validate.min.js"></script> <script type="text/javascript" src="/Content/web/js/jquery.validate.unobtrusive.min.js"></script> ...... 
@using (Html.BeginForm()) { @Html.AntiForgeryToken() <div class="titleh"> <h3>Edit a user account</h3> </div> <div class="body"> @Html.HiddenFor(model => model.RegisterUser.UserName) @Html.Partial("_CreateOrEdit", Model) <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(model => model.RegisterUser.IsApproved)</span> @Html.RadioButtonFor(model => model.RegisterUser.IsApproved, true, new { @class = "uniform" }) Active @Html.RadioButtonFor(model => model.RegisterUser.IsApproved, false, new { @class = "uniform" }) Disabled <div class="clear"></div> </div> <div class="button-box"> <input type="submit" name="submit" value="Save" class="st-button"/> @Html.ActionLink("Back to List", "Index", null, new { @class = "st-clear" }) </div> </div> } CreateEdit Partial View @model Project.Domain.Entities.UserRoleModel <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Firstname)</span> @Html.TextBoxFor(m => m.RegisterUser.Firstname, new { @class = "st-forminput", @style = "width:300px" }) @Html.ValidationMessageFor(m => m.RegisterUser.Firstname) <div class="clear"></div> </div> <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Surname)</span> @Html.TextBoxFor(m => m.RegisterUser.Surname, new { @class = "st-forminput", @style = "width:300px" }) @Html.ValidationMessageFor(m => m.RegisterUser.Surname) <div class="clear"></div> </div> <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Email)</span> @Html.TextBoxFor(m => m.RegisterUser.Email, new { @class = "st-forminput", @style = "width:300px" }) @Html.ValidationMessageFor(m => m.RegisterUser.Email) <div class="clear"></div> </div> Thanks, Rich
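    One workaround that is sometimes suggested for "works only after the field has been validated once" symptoms (a sketch under the assumption that the remote rule simply has not run before the first submit; this is not a confirmed root cause of the e[h] error) is to kick the remote rule off once when the page loads. RegisterUser_Email is the id MVC is assumed to generate for RegisterUser.Email:

        $(function () {
            // run the remote email check once up front so the first real submit
            // is not racing an un-run remote validator (hypothetical workaround)
            var email = $('#RegisterUser_Email');
            if (email.length && email.val()) {
                email.valid();   // jquery.validate: validates just this element
            }
        });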


  • Data table columns become out of order after changing data source.

    - by Scott Chamberlain
    This is kind of a oddball problem so I will try to describe the best that I can. I have a DataGridView that shows a list of contracts and various pieces of information about them. There are three view modes: Contract Approval, Pre-Production, and Production. Each mode has it's own set of columns that need to be displayed. What I have been doing is I have three radio buttons one for each contract style. all of them fire their check changed on this function private void rbContracts_CheckedChanged(object sender, EventArgs e) { dgvContracts.Columns.Clear(); if (((RadioButton)sender).Checked == true) { if (sender == rbPreProduction) { dgvContracts.Columns.AddRange(searchSettings.GetPreProductionColumns()); this.contractsBindingSource.DataMember = "Preproduction"; this.preproductionTableAdapter.Fill(this.searchDialogDataSet.Preproduction); } else if (sender == rbProduction) { dgvContracts.Columns.AddRange(searchSettings.GetProductionColumns()); this.contractsBindingSource.DataMember = "Production"; this.productionTableAdapter.Fill(this.searchDialogDataSet.Production); } else if (sender == rbContracts) { dgvContracts.Columns.AddRange(searchSettings.GetContractsColumns()); this.contractsBindingSource.DataMember = "Contracts"; this.contractsTableAdapter.Fill(this.searchDialogDataSet.Contracts); } } } Here is the GetxxxColumns function public DataGridViewColumn[] GetPreProductionColumns() { this.dgvTxtPreAccount.Visible = DgvTxtPreAccountVisable; this.dgvTxtPreImpromedAccNum.Visible = DgvTxtPreImpromedAccNumVisable; this.dgvTxtPreCreateDate.Visible = DgvTxtPreCreateDateVisable; this.dgvTxtPreCurrentSoftware.Visible = DgvTxtPreCurrentSoftwareVisable; this.dgvTxtPreConversionRequired.Visible = DgvTxtPreConversionRequiredVisable; this.dgvTxtPreConversionLevel.Visible = DgvTxtPreConversionLevelVisable; this.dgvTxtPreProgrammer.Visible = DgvTxtPreProgrammerVisable; this.dgvCbxPreEdge.Visible = DgvCbxPreEdgeVisable; this.dgvCbxPreEducationRequired.Visible = DgvCbxPreEducationRequiredVisable; this.dgvTxtPreTargetMonth.Visible = DgvTxtPreTargetMonthVisable; this.dgvCbxPreEdgeDatesDate.Visible = DgvCbxPreEdgeDatesDateVisable; this.dgvTxtPreStartDate.Visible = DgvTxtPreStartDateVisable; this.dgvTxtPreUserName.Visible = DgvTxtPreUserNameVisable; this.dgvCbxPreProductionId.Visible = DgvCbxPreProductionIdVisable; return new System.Windows.Forms.DataGridViewColumn[] { this.dgvTxtPreAccount, this.dgvTxtPreImpromedAccNum, this.dgvTxtPreCreateDate, this.dgvTxtPreCurrentSoftware, this.dgvTxtPreConversionRequired, this.dgvTxtPreConversionLevel, this.dgvTxtPreProgrammer, this.dgvCbxPreEdge, this.dgvCbxPreEducationRequired, this.dgvTxtPreTargetMonth, this.dgvCbxPreEdgeDatesDate, this.dgvTxtPreStartDate, this.dgvTxtPreUserName, this.dgvCbxPreProductionId, this.dgvTxtCmnHold, this.dgvTxtCmnConcern, this.dgvTxtCmnAccuracyStatus, this.dgvTxtCmnEconomicStatus, this.dgvTxtCmnSoftwareStatus, this.dgvTxtCmnServiceStatus, this.dgvTxtCmnHardwareStatus, this.dgvTxtCmnAncillaryStatus, this.dgvTxtCmnFlowStatus, this.dgvTxtCmnImpromedAccountNum, this.dgvTxtCmnOpportunityId}; } public DataGridViewColumn[] GetProductionColumns() { this.dgvcTxtProAccount.Visible = DgvTxtProAccountVisable; this.dgvTxtProImpromedAccNum.Visible = DgvTxtProImpromedAccNumVisable; this.dgvTxtProCreateDate.Visible = DgvTxtProCreateDateVisable; this.dgvTxtProConvRequired.Visible = DgvTxtProConvRequiredVisable; this.dgvTxtProEdgeRequired.Visible = DgvTxtProEdgeRequiredVisable; this.dgvTxtProStartDate.Visible = DgvTxtProStartDateVisable; 
this.dgvTxtProHardwareRequired.Visible = DgvTxtProHardwareReqiredVisable; this.dgvTxtProStandardDate.Visible = DgvTxtProStandardDateVisable; this.dgvTxtProSystemScheduleDate.Visible = DgvTxtProSystemScheduleDateVisable; this.dgvTxtProHwSystemCompleteDate.Visible = DgvTxtProHwSystemCompleteDateVisable; this.dgvTxtProHardwareTechnician.Visible = DgvTxtProHardwareTechnicianVisable; return new System.Windows.Forms.DataGridViewColumn[] { this.dgvcTxtProAccount, this.dgvTxtProImpromedAccNum, this.dgvTxtProCreateDate, this.dgvTxtProConvRequired, this.dgvTxtProEdgeRequired, this.dgvTxtProStartDate, this.dgvTxtProHardwareRequired, this.dgvTxtProStandardDate, this.dgvTxtProSystemScheduleDate, this.dgvTxtProHwSystemCompleteDate, this.dgvTxtProHardwareTechnician, this.dgvTxtCmnHold, this.dgvTxtCmnConcern, this.dgvTxtCmnAccuracyStatus, this.dgvTxtCmnEconomicStatus, this.dgvTxtCmnSoftwareStatus, this.dgvTxtCmnServiceStatus, this.dgvTxtCmnHardwareStatus, this.dgvTxtCmnAncillaryStatus, this.dgvTxtCmnFlowStatus, this.dgvTxtCmnImpromedAccountNum, this.dgvTxtCmnOpportunityId}; } public DataGridViewColumn[] GetContractsColumns() { this.dgvTxtConAccount.Visible = this.DgvTxtConAccountVisable; this.dgvTxtConAccuracyStatus.Visible = this.DgvTxtConAccuracyStatusVisable; this.dgvTxtConCreateDate.Visible = this.DgvTxtConCreateDateVisable; this.dgvTxtConEconomicStatus.Visible = this.DgvTxtConEconomicStatusVisable; this.dgvTxtConHardwareStatus.Visible = this.DgvTxtConHardwareStatusVisable; this.dgvTxtConImpromedAccNum.Visible = this.DgvTxtConImpromedAccNumVisable; this.dgvTxtConServiceStatus.Visible = this.DgvTxtConServiceStatusVisable; this.dgvTxtConSoftwareStatus.Visible = this.DgvTxtConSoftwareStatusVisable; this.dgvCbxConPreProductionId.Visible = this.DgvCbxConPreProductionIdVisable; this.dgvCbxConProductionId.Visible = this.DgvCbxConProductionVisable; return new System.Windows.Forms.DataGridViewColumn[] { this.dgvTxtConAccount, this.dgvTxtConImpromedAccNum, this.dgvTxtConCreateDate, this.dgvTxtConAccuracyStatus, this.dgvTxtConEconomicStatus, this.dgvTxtConSoftwareStatus, this.dgvTxtConServiceStatus, this.dgvTxtConHardwareStatus, this.dgvCbxConPreProductionId, this.dgvCbxConProductionId, this.dgvTxtCmnHold, this.dgvTxtCmnConcern, this.dgvTxtCmnAccuracyStatus, this.dgvTxtCmnEconomicStatus, this.dgvTxtCmnSoftwareStatus, this.dgvTxtCmnServiceStatus, this.dgvTxtCmnHardwareStatus, this.dgvTxtCmnAncillaryStatus, this.dgvTxtCmnFlowStatus, this.dgvTxtCmnImpromedAccountNum, this.dgvTxtCmnOpportunityId}; } The issue is when I check a button the first time, everything shows up ok. I choose another view, everything is ok. But when I click on the first view the columns are out of order (it is like they are in reverse order but it is not exactly the same). this happens only to the first page you click on, the other two are fine. You can click off and click back on as many times as you want after those initial steps, The first list you selected at the start will be out of order the other two will be correct. Any ideas on what could be causing this?
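    One thing that can produce exactly this "the first view comes back reordered" behaviour (a hedged guess, not a verified diagnosis of this code): DataGridViewColumn objects remember the DisplayIndex they were given the last time they were shown, and Columns.Clear() followed by AddRange() does not reset it. Reassigning DisplayIndex after adding the columns would rule that out, for example:

        // hypothetical helper: force the visual order to match the array order
        private static void AddColumnsInOrder(DataGridView grid, DataGridViewColumn[] columns)
        {
            grid.Columns.Clear();
            grid.Columns.AddRange(columns);
            for (int i = 0; i < columns.Length; i++)
            {
                columns[i].DisplayIndex = i;   // DisplayIndex survives Clear(), so reset it explicitly
            }
        }

        // usage inside rbContracts_CheckedChanged, for example:
        // AddColumnsInOrder(dgvContracts, searchSettings.GetPreProductionColumns());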


  • How to detect a crashing tabbed WebBrowser and handle it?

    - by David Eaton
    I have a desktop application (forms) with a tab control, I assign a tab and a new custom webrowser control. I open up about 10 of these tabs. Each one visits about 100 - 500 different pages. The trouble is that if 1 webbrowser control has a problem it shuts down the entire program. I want to be able to close the offending webbrowser control and open a new one in it's place. Is there any event that I need to subscribe to catch a crashing or unresponsive webbrowser control ? I am using C# on windows 7 (Forms), .NET framework v4 =============================================================== UPDATE: 1 - The Tabbed WebBrowser Example Here is the code I have and How I use the webbrowser control in the most basic way. Create a new forms project and name it SimpleWeb Add a new class and name it myWeb.cs, here is the code to use. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; using System.Security.Policy; namespace SimpleWeb { //inhert all of webbrowser class myWeb : WebBrowser { public myWeb() { //no javascript errors this.ScriptErrorsSuppressed = true; //Something we want set? AssignEvents(); } //keep near the top private void AssignEvents() { //assign WebBrowser events to our custom methods Navigated += myWeb_Navigated; DocumentCompleted += myWeb_DocumentCompleted; Navigating += myWeb_Navigating; NewWindow += myWeb_NewWindow; } #region Events //List of events:http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser_events%28v=vs.100%29.aspx //Fired when a new windows opens private void myWeb_NewWindow(object sender, System.ComponentModel.CancelEventArgs e) { //cancel all popup windows e.Cancel = true; //beep to let you know canceled new window Console.Beep(9000, 200); } //Fired before page is navigated (not sure if its before or during?) private void myWeb_Navigating(object sender, System.Windows.Forms.WebBrowserNavigatingEventArgs args) { } //Fired after page is navigated (but not loaded) private void myWeb_Navigated(object sender, System.Windows.Forms.WebBrowserNavigatedEventArgs args) { } //Fired after page is loaded (Catch 22 - Iframes could be considered a page, can fire more than once. Ads are good examples) private void myWeb_DocumentCompleted(System.Object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs args) { } #endregion //Answer supplied by mo. (modified)? public void OpenUrl(string url) { try { //this.OpenUrl(url); this.Navigate(url); } catch (Exception ex) { MessageBox.Show("Your App Crashed! Because = " + ex.ToString()); //MyApplication.HandleException(ex); } } //Keep near the bottom private void RemoveEvents() { //Remove Events Navigated -= myWeb_Navigated; DocumentCompleted -= myWeb_DocumentCompleted; Navigating -= myWeb_Navigating; NewWindow -= myWeb_NewWindow; } } } On Form1 drag a standard tabControl and set the dock to fill, you can go into the tab collection and delete the pre-populated tabs if you like. Right Click on Form1 and Select "View Code" and replace it with this code. 
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using mshtml; namespace SimpleWeb { public partial class Form1 : Form { public Form1() { InitializeComponent(); //Load Up 10 Tabs for (int i = 0; i <= 10; i++) { newTab("Test_" + i, "http://wwww.yahoo.com"); } } private void newTab(string Title, String Url) { //Create a new Tab TabPage newTab = new TabPage(); newTab.Name = Title; newTab.Text = Title; //create webbrowser Instance myWeb newWeb = new myWeb(); //Add webbrowser to new tab newTab.Controls.Add(newWeb); newWeb.Dock = DockStyle.Fill; //Add New Tab to Tab Pages tabControl1.TabPages.Add(newTab); newWeb.OpenUrl(Url); } } } Save and Run the project. Using the answer below by mo. , you can surf the first url with no problem, but what about all the urls the user clicks on? How do we check those? I prefer not to add events to every single html element on a page, there has to be a way to run the new urls thru the function OpenUrl before it navigates without having an endless loop. Thanks.
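    There is no WebBrowser event that announces "this instance is about to take the process down". For managed exceptions raised on the UI thread, a common mitigation (a sketch only; it cannot catch a native crash inside mshtml itself) is to register process-wide handlers at startup and dispose or recreate the offending tab there:

        // in Program.cs, before Application.Run(new Form1()) - hedged sketch
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
        Application.ThreadException += (sender, e) =>
        {
            // log and keep the other tabs alive instead of letting the app exit
            Console.WriteLine("UI thread exception: " + e.Exception);
        };
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            Console.WriteLine("Non-UI exception: " + e.ExceptionObject);
        };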


  • Trying to draw textured triangles on device fails, but the emulator works. Why?

    - by Dinedal
    I have a series of OpenGL-ES calls that properly render a triangle and texture it with alpha blending on the emulator (2.0.1). When I fire up the same code on an actual device (Droid 2.0.1), all I get are white squares. This suggests to me that the textures aren't loading, but I can't figure out why they aren't loading. All of my textures are 32-bit PNGs with alpha channels, under res/raw so they aren't optimized per the sdk docs. Here's how I am loading my textures: private void loadGLTexture(GL10 gl, Context context, int reasource_id, int texture_id) { //Get the texture from the Android resource directory Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), reasource_id, sBitmapOptions); //Generate one texture pointer... gl.glGenTextures(1, textures, texture_id); //...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[texture_id]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); //Clean up bitmap.recycle(); } Here's how I am rendering the texture: //Clear gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); //Enable vertex buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); //Push transformation matrix gl.glPushMatrix(); //Transformation matrices gl.glTranslatef(x, y, 0.0f); gl.glScalef(scalefactor, scalefactor, 0.0f); gl.glColor4f(1.0f,1.0f,1.0f,1.0f); //Bind the texture gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[textureid]); //Draw the vertices as triangles gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); //Pop the matrix back to where we left it gl.glPopMatrix(); //Disable the client state before leaving gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); And here are the options I have enabled: gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading gl.glEnable(GL10.GL_DEPTH_TEST); //Enables Depth Testing gl.glDepthFunc(GL10.GL_LEQUAL); //The Type Of Depth Testing To Do gl.glEnable(GL10.GL_TEXTURE_2D); gl.glEnable(GL10.GL_BLEND); gl.glBlendFunc(GL10.GL_SRC_ALPHA,GL10.GL_ONE_MINUS_SRC_ALPHA); Edit: I just tried supplying a BitmapOptions to the BitmapFactory.decodeResource() call, but this doesn't seem to fix the issue, despite manually setting the same preferredconfig, density, and targetdensity. Edit2: As requested, here is a screenshot of the emulator working. The underlaying triangles are shown with a circle texture rendered onto it, the transparency is working because you can see the black background. 
Here is a shot of what the droid does with the exact same code on it: Edit3: Here are my BitmapOptions, updated the call above with how I am now calling the BitmapFactory, still the same results as below: sBitmapOptions.inPreferredConfig = Bitmap.Config.RGB_565; sBitmapOptions.inDensity = 160; sBitmapOptions.inTargetDensity = 160; sBitmapOptions.inScreenDensity = 160; sBitmapOptions.inDither = false; sBitmapOptions.inSampleSize = 1; sBitmapOptions.inScaled = false; Here are my vertices, texture coords, and indices: /** The initial vertex definition */ private static final float vertices[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f }; /** The initial texture coordinates (u, v) */ private static final float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; /** The initial indices definition */ private static final byte indices[] = { //Faces definition 0,1,3, 0,3,2 }; Is there anyway to dump the contents of the texture once it's been loaded into OpenGL ES? Maybe I can compare the emulator's loaded texture with the actual device's loaded texture? I did try with a different texture (the default android icon) and again, it works fine for the emulator but fails to render on the actual phone. Edit4: Tried switching around when I do texture loading. No luck. Tried using a constant offset of 0 to glGenTextures, no change. Is there something that I'm using that the emulator supports that the actual phone does not? Edit5: Per Ryan below, I resized my texture from 200x200 to 256x256, and the issue was NOT resolved. Edit: As requested, added the calls to glVertexPointer and glTexCoordPointer above. Also, here is the initialization of vertexBuffer, textureBuffer, and indexBuffer: ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); loadGLTextures(gl, this.context);

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the Script of this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the Community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – “What is the difference between a DELETE and TRUNCATE?“ Ahmedabad SQL Server User Group Expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution and, as an encouragement to User Group members, I am publishing the same article over here. Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of Database Administration, Development and Implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and actively contributes to the community by participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: The opinions expressed herein are Nakul’s own personal opinions and do not represent his employer’s view in any way. All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE, or simply, CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were: DELETE: DELETE supports a WHERE clause. DELETE removes rows from a table, row-by-row. Because DELETE moves row-by-row, it acquires a row-level lock. Depending upon the recovery model of the database, DELETE is a fully-logged operation. Because DELETE moves row-by-row, it can fire off triggers. TRUNCATE: TRUNCATE does not support a WHERE clause. TRUNCATE works by directly removing the individual data pages of a table. TRUNCATE directly acquires a table-level lock. (Because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation.) TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database. Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged). Finally, Vinod popped the big homework question that must be critically analyzed: “We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?” After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books On Line, the following jumps right out: “The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.“ Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user/set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a “securable”, which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books On Line). SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql) Script Provided at the end of the article. By the end of this demo, one user will be able to do all the CRUD operations except the TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following: 1. A test database 2. Two database roles, associated logins and users 3. Switch over to the test database and create a test table. Then, add some data into it. I am using row constructors, which are new to SQL 2008. Creating the modules that will be used to enforce permissions 1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest 2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed. Assigning the permissions Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as under: To apply the security rights, we use the GRANT and DENY clauses, as under: That’s it! We are now ready for our big test! THE TEST (01B_Truncate Table Test Queries.sql) Script Provided at the end of the article. I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that’s required is to run through the script – 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here. I will leave it to the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study. 1. Testing SELECT permissions 2. Testing TRUNCATE permissions (Remember, “deny by default”?) 3.
Trying to circumvent security by trying to TRUNCATE the table using the stored procedure Hence, we have now proved that a user can indeed be assigned permissions to specifically assign TRUNCATE permissions. I also hope that the above has sparked curiosity towards putting some security around the probably “destructive” operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader’s production environments). 01A_Truncate Table Permissions.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Run through, step-by-step through the sequence till Step 08 to create a test database 2. Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows, one where you have logged in as 'RestrictedTruncate', and the other as 'AllowedTruncate' 3. Come back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 01: Create a new test database CREATE DATABASE TruncateTestDB GO USE TruncateTestDB GO -- Step 02: Add roles and users to demonstrate the security of the Truncate operation -- 2a. Create the new roles CREATE ROLE AllowedTruncateRole; GO CREATE ROLE RestrictedTruncateRole; GO -- 2b. Create new logins CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO -- 2c. Create new Users using the roles and logins created aboave CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo GO CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo GO -- 2d. 
Add the newly created login to the newly created role sp_addrolemember 'AllowedTruncateRole','TruncateUser' GO sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser' GO -- Step 03: Change over to the test database USE TruncateTestDB GO -- Step 04: Create a test table within the test databse CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50)) GO -- Step 05: Populate the required data INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad') GO -- Step 06: Encapsulate the DELETE within another module CREATE PROCEDURE proc_DeleteMyTable WITH EXECUTE AS SELF AS DELETE FROM TruncateTestDB..TruncatePermissionsTest GO -- Step 07: Encapsulate the TRUNCATE within another module CREATE PROCEDURE proc_TruncateMyTable WITH EXECUTE AS SELF AS TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest GO -- Step 08: Apply Security /* *****************************SECURITY MATRIX*************************************** =================================================================================== Object                   | Permissions |                 Login |             | AllowedTruncate   |   RestrictedTruncate |             |User:NoTruncateUser|   User:TruncateUser =================================================================================== TruncatePermissionsTest  | SELECT,     |      GRANT        |      (Default) | INSERT,     |                   | | UPDATE,     |                   | | DELETE      |                   | -------------------------+-------------+-------------------+----------------------- TruncatePermissionsTest  | ALTER       |      DENY         |      (Default) -------------------------+-------------+----*/----------------+----------------------- proc_DeleteMyTable | EXECUTE | GRANT | DENY -------------------------+-------------+-------------------+----------------------- proc_TruncateMyTable | EXECUTE | DENY | GRANT -------------------------+-------------+-------------------+----------------------- *****************************SECURITY MATRIX*************************************** */ /* Table: TruncatePermissionsTest*/ GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO /* Procedure: proc_DeleteMyTable*/ GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser GO DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser GO /* Procedure: proc_TruncateMyTable*/ DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser GO GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser GO -- Step 09: Test --Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows: --    1. one where you have logged in as 'RestrictedTruncate', and --    2. 
the other as 'AllowedTruncate' -- Step 10: Cleanup sp_droprolemember 'AllowedTruncateRole','TruncateUser' GO sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser' GO DROP USER TruncateUser GO DROP USER NoTruncateUser GO DROP LOGIN AllowedTruncate GO DROP LOGIN RestrictedTruncate GO DROP ROLE AllowedTruncateRole GO DROP ROLE RestrictedTruncateRole GO USE MASTER GO DROP DATABASE TruncateTestDB GO 01B_Truncate Table Test Queries.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09 2. Execute this step-by-step in two different SSMS windows a. One where you have logged in as 'RestrictedTruncate', and b. The other as 'AllowedTruncate' 3. Return back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 09A: Switch to the test database USE TruncateTestDB GO -- Step 09B: Ensure that we have valid data SELECT * FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09C: Attempt to Truncate Data from the table without using the stored procedure TRUNCATE TABLE TruncatePermissionsTest GO -- (Expected: Following error will occur) --  Msg 1088, Level 16, State 7, Line 2 --  Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions. -- Step 09D:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin') GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09E: Attempt to Truncate Data from the table using the stored procedure EXEC proc_TruncateMyTable GO -- (Expected: Will execute successfully with 'AllowedTruncate' user, will error out as under with 'RestrictedTruncate') -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'. -- Step 09F:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens') GO --Step 09G: Attempt to Delete Data from the table without using the stored procedure DELETE FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 2 -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. 
-- Step 09H:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece') GO --Step 09I: Attempt to Delete Data from the table using the stored procedure EXEC proc_DeleteMyTable GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'. --Step 09J: Close this SSMS window and return back to "Truncate Table Permissions.sql" Thank you Nakul to take up the challenge and prove that Ahmedabad and Gandhinagar SQL Server User Group has talent to solve difficult problems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008).  We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: A tour of the Task. The properties of the Task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: Returning a single value from a SQL query with two input parameters. Returning a rowset from a SQL query. Executing a stored procedure and retrieving a rowset, a return value, an output parameter value and passing in an input parameter. Passing in the SQL Statement from a variable. Passing in the SQL Statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task, so we cannot just create any connection manager; these are detailed in a few graphics' time. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the general tab as shown below. Take a bit of time to have a look around here as throughout this article we will be revisiting this page many times. Whilst on the general tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the general tab, move on to the next tab, which is the parameter mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads up, this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If, however, you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is.
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for this as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL Statements on the General tab and how to map them to variables on the Parameter Mapping tab see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package because of everything's base structure The Container. BypassPrepare Should the statement be prepared before sending to the connection manager destination (True/False) Connection This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package. DelayValidation Really interesting property and it tells the task to not validate until it actually executes. A usage for this may be that you are operating on table yet to be created but at runtime you know the table will be there. Description Very simply the description of your Task. Disable Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself. DisableEventHandlers As a result of events that happen in the task, should the event handlers for the container fire? ExecValueVariable The variable assigned here will get or set the execution value of the task. Expressions Expressions as we mentioned earlier are a really powerful tool in SSIS and this graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package allowing you a great deal of flexibility FailPackageOnFailure If this task fails does the package? FailParentOnFailure If this task fails does the parent container? A task can he hosted inside another container i.e. the For Each Loop Container and this would then be the parent. ForcedExecutionValue This property allows you to hard code an execution value for the task. ForcedExecutionValueType What is the datatype of the ForcedExecutionValue? ForceExecutionResult Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion. ForceExecutionValue Should we force the execution result? IsolationLevel This is the transaction isolation level of the task. IsStoredProcedure Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection. LocaleID Gets or sets the LocaleID of the container. LoggingMode Should we log for this container and what settings should we use? 
The value choices are UseParentSetting, Enabled and Disabled. MaximumErrorCount How many times can the task fail before we call it a day? Name Very simply the name of the task. ResultSetType How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML. SqlStatementSource Your Query/SQL Statement. SqlStatementSourceType The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables. TimeOut How long should the task wait to receive results? TransactionOption How should the task handle being asked to join a transaction? Usage Examples As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this Task: ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was: SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee WHERE (HireDate BETWEEN ? AND ?) Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We Add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options and one of them is namespace. After checking this you will see where you can define your own namespace. The next tab, result set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, then if you remember from earlier, we are allowed to assign a name to the resultset but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
In our package we set a Breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task, and what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed, not just a single row, single value. There are no input parameters required so the variables window is nice and straightforward. One variable of type object. Here is the statement that will form the source for our Resultset. select p.ProductNumber, p.name, pc.Name as ProductCategoryName FROM Production.ProductCategory pc JOIN Production.ProductSubCategory psc ON pc.ProductCategoryID = psc.ProductCategoryID JOIN Production.Product p ON psc.ProductSubCategoryID = p.ProductSubCategoryID We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the parameter mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set so we will not cover it again here. For this example we are going to need 4 variables. One for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The ExecuteSQL task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package or it could have been populated from outside using a configuration. The ResultSet property is set to single row and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General Tab again. Looking at the variables we have in this package you can see we have only two. One for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result name to our variable and this can be a named Result Name (the column name or alias returned by the query) and not 0.
The expected result into our variable should be the amount of rows in the Person.Contact table and if we look in the watch window we see that it is.   Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection mananger to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection" As you can see in the graphic below we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up we can now see in the connection manager tray our file connection manager sitting alongside our OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General Tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples depending on the options chosen so we will not cover them again here.   We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
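The article defers working with that Full result set object to later instalments, but as a quick taste, here is a hedged sketch of how a 2008-style C# Script Task could consume it. The variable name User::ResultSet is an assumption, and the column names come from the rowset query shown earlier.

    // Body of the Script Task's generated ScriptMain class (C#, SSIS 2008 or later).
    // Sketch only: assumes an Object variable User::ResultSet was filled by an
    // Execute SQL Task configured for a Full result set.
    using System.Data;
    using System.Data.OleDb;

    public void Main()
    {
        var table = new DataTable();

        // The variable holds an ADO recordset (the COM object mentioned above);
        // OleDbDataAdapter.Fill knows how to translate it into a DataTable.
        new OleDbDataAdapter().Fill(table, Dts.Variables["User::ResultSet"].Value);

        bool fireAgain = true;
        foreach (DataRow row in table.Rows)
        {
            Dts.Events.FireInformation(0, "ResultSet dump",
                row["ProductNumber"] + " - " + row["ProductCategoryName"],
                string.Empty, 0, ref fireAgain);
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }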

    Read the article

  • SQL Developer Q&A from ODTUG Tips & Tricks Webcast

    - by thatjeffsmith
    Another great webcast yesterday – if you’re a paying member of ODTUG you can watch the show for yourself in their archives. If not, you can get my slide deck off of SlideShare. About 150 of you brave souls sat through an entire hour of me talking and then 10 more minutes of Q&A. We went through everything rapid-fire style, so I thought I would post the questions and my refined answers here for your perusal. In the order in which I received them: You showed the preference to choose between resultsets in same tab or ain a new tab. I understand that we can not have it both using different hotkeys? For example: F5 run and resultset to same tab, ctrl-f5 same but to new tab? Sometimes you want the one other times the other. The questioner is asking about this preference, Tools Preferences Database Worksheet ‘Show query results in new tabs.’ This is an all or nothing proposition. But, there’s another, perhaps better way: the document PINs. If you have a result set you don’t want to lose, ‘pin it.’ Pin multiple result sets or plans for review and comparisons. You mentioned that sometimes it’s hard to remember where a certain preference is. I agree. So enhancement request: add a search-box to the preferences window. Maybe like in, for example, UltraEdit. It shows you all preferences containing your search criteria. Actually, we do have a search mechanism type the search string, we auto-filter the preferences Is there a version of SQL Developer that will connect to an 8i database (Yes, I realize how old that database version is!) Sorry, no. We also don’t have a version that will run on Windows 3.11 for Workgroups…probably. How do we access your blog? Carefully, and with much trepidation. When you’re ready, go to http://www.thatjeffsmith.com Is there a way to get good formatting with predefined settings? I believe the questioner is referring to the script output a la SQL*Plus formatting commands. Yes, there is. You can build your formatting commands into your login.sql script, and those will be applied for your script execution sessions. Example here. Why this version 4.0 doesn’t support external plugins? It does, it just requires the plugin developer to re-factor it for OSGi. This came about when we updated the JDeveloper framework to the later 11g/12c stuff. Any change in hookup with SVN? The only change with Subversion is that internally we’re using 1.7 stuff now. You can use SQLDev to work with a 1.8 SVN server, but if you get a working copy with a 1.8 client SQLDev won’t be able to do anything with it… Command line utilities ? improvements Yes! The long answer is here. Is that a Hint or a Comment?? /*CSV*/ It’s a comment – the database won’t recognize it, but SQLDev does when it goes through our statement pre-processor. We’ll redirect the output through our CSV formatter before displaying the results in the Script Output panel. That’s why this will ONLY work in SQL Developer. Are you selecting “”Run Script”" to get that CSV or HTML output, rather than “”Run Statement”"? Yes, the formatter hints like the CSV one mentioned above only make sense in a script output panel vs a grid. How do you save relational models once they’re defined? I’ve had trouble with setting one up, “”saving”" it, then the design work I did is longer there when loading it later. File – Data Modeler – Save. If you’re running the Modeler inside of SQL Developer, the menu’ing interface can get a bit tricky. That’s why I recommend using the stand along if you’re doing anything with a model that takes more than 5 minutes. 
See how the Data Modeler menus are folded up under the SQL Dev menus? Can u unplug and plug into another container in a database with only sqldeveloper? Yes, you can ‘Detach’ a multitentant 12c Database ‘pluggable’ and plug it into another instance. You have the option to copy or move the files. This isn’t a trivial operation, pay attention Can you run APEX code directly on the adopter? No, at least not as I understand your question. Give me an example and I can give you a better example. Is there a way that when u click on a particular table it wouldn’t show the table with the info but just to see the columns underneath clicking on the node? Yes, another one of my tips! Disable Tools Preferences Datbase ObjectViewer ‘Open Object on Single Click.’ Is there a patch to allow a double click on a procedure on an open package body to take you to that procedure in the editor? This has been fixed for EA3 – to be released soon. Can you open the spec with the body? You can open the spec or the body, and then also open the other. But you can’t open both with a single click. So if you want you can set it to CSV but can you also see it as a regular result set in rows and then click in the results to export to excel? If you run your query as a statement with Ctrl-Enter, you can send the data to Excel via the Export dialog. Will it do intellisense like using the alias and pop up the column, object names? Yes! You can select more than one column… Can a DBA turn off items from a high level for users so the only thing they can perform would be selects? A DBA should turn things ON, not OFF. Create a user with only CONNECT and required SELECT privs and you’re good to go, regardless of which application they are using. I use PL/SQL Developer from allround automations and was SQL Developer illiterate and now I like this for myself as a DBA. Now I get to train developers on this tool since they have been asking how to use this tool. Thank you. No, THANK YOU! Can you run multi queries in the worksheet after you added it to the worksheet? Yes, highlight what you want to run, and hit Ctrl-Enter. Can you export the result sets to excel, etc. Yes. In version 4.0 and going forward, I recommend you use the XLSX option for exports. It will run faster and consume much, much less memory. Will this be available after the webinar? If you are a ODTUG member, check out the webinar recordings in the archives. That’s worth the $99 right there. Ask your boss if they have $99 in their training budget for you. If not, maybe time to look for another job? Can you run command lines from this tool? Like executes without issuing a command line prompt? Ok, I’m stumped on this one. Not sure what you’re asking. You can setup external tools under the Tools menu, and from there you could probably rig what you’re looking for, but I’m not sure what you’re looking for… This maybe?Where and when to put the program Is there any way to save a copy database command set (certain tables/views etc) in a script? Yes! Create a cart with the objects you want to be used in the Copy. Then use the new command-line interface to kick off SQL Developer to do the copy of those said objects. How can we export the preference and then import them into different or same version of SQL Developer ? Today, there’s no interface for this. 
But you could copy the files around manually…Kris Rice has a cool idea where you can set your preferences to be saved to your local drop box folder and then you can use SQL Developer from anywhere with the same preferences What happens to SQL*Plus commands like COL & BREAK Nothing. Those are not currently supported. Is there a place where all “”hotkey”" functionality is listed? thanks Yes. Tools – Preferences – Shortcut Keys. And you can change them! Any tips for the DBA side of things? will the SQL generated for objects have more information (e.g. user privileges) in v4? You can get this now. In Tools – Preferences – Database – Utilities – Export, check ‘Grants.’ Voila! You now have the code necessary to recreate your object privileges Is there a limit on the number of rows that could be imported / exported from/to excel ? The only hard-coded limit lies in Excel. For best performance, use v4 and XLSX formats for Exports. Is there a way to see/watch active sessions to see current SQL and the explain plan being used, etc. Kind of like that frog product. Cough, yes. Tools – Monitor Sessions. Click on session, see SQL and plan. The plan was added in v4. If you’re not in version 4, use the Reports – Active Sessions to get the plans. In the DBA section is there a way to manage say tablespaces to add data files, shrink, edit profiles, etc. Yes, we support all of that. View – DBA. Connect, go to the Storage node. Are you (Jeff) available for a live presentation at our Oracle User Group here in Indiana? Maybe. Email me and we’ll see, [email protected] Where do I go to download sql developer 4.0? The Internet of course! Can you directly edit query results? Nope. But what I think you’re asking is, can I edit the data in the tables that are reflected in my query results? You can change the query results by changing your query of course. Or this. Can you show html example? Sure. I’d embed the HTML here, but it’s a lot of code, try it for yourself! How can I quickly close many SQL worksheet windows, but not all? Window – Documents. Multi-select, hit the ‘Close Document(s)’ button. What does the vertical red line denote? That’s the margin. Tells you when you’ve typed too far and it’s time for a carriage return. Did DBA/Database Status/Instance Viewer make it officially into 4.0? It was sort-of included in the first EA. I have NO idea what you’re talking about, WINK-WINK. No, it’s not in v4.0. Is there a “”handy”" way to debug trigger code? Yes, open your trigger. Hit the debug button. Works great as long as it’s a DML trigger. Will you make your presentation file available for us ( in PPT and/or PDF format ) ? It’s on SlideShare. How do you get SqlDeveloper to escape ‘ correctly when you use the wizard to export data as insert statements? If it’s not doing that, it’s a bug. I’ll take a look at that scenario ASAP.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again, in Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies, in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, Asp.net Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test, lets run the test to see load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, this provides key results in a format that is compact and easy to read. You can also print the load test summary, this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where i discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create excel reports and conduct side by side analysis of two test results or to track trend analysis. The tools also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation, I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, i managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads as all the threads were busy processing existing request, one easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. When ever the garbage collection runs it stops processing any pages so all requests that come in during that period are queued up, but realistically the garbage collection completes in fraction of a a second. To understand this better lets look at the .net heap, it is divided into large heap and small heap, anything greater than 85kB in size will be allocated to the Large object heap, the Large object heap is non compacting and remember large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived are suppose to live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below all < 85 KB size objects are first assigned to Gen-0, when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started, the process checks for all the dead objects and assigns them as the valid candidate for deletion to free up memory and promotes all the remaining objects in Gen-0 to Gen-1. So in the future when ever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen – 0 again, all of Gen – 1 dead objects are drenched and rest are moved to Gen-2 and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your Garbage collection process has started to take much more time than it usually takes. Now as I mentioned earlier when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why possibly page requests are getting queued up, apart from this it could also be the case that you are waiting for a long running database process to complete.      Lets explore the heap a bit more… What is really a case of crisis is when the objects are living long enough to make it to Gen-2 and then dying, this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server then write a simple application that chucks in a lot of data in cache, run a load test over it for about 10-15 minutes, forcing a lot of data in memory causing the heap to run out of memory. If you get to such a state where you start running out of memory the IIS as a mode of recovery restarts the worker process. It is great way to free up all your memory in the heap but this would clear the cache. The problem with this is if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty forcing them either to get frustrated and go to a competitor website or if the customer is really patient, give it another try! 
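As a quick aside (an illustrative console sketch, not code from the article), you can watch generation promotion and the 85 KB large object heap threshold in action with GC.GetGeneration:

    using System;

    class HeapDemo
    {
        static void Main()
        {
            var small = new byte[1024];                    // well under 85 KB -> small object heap, Gen 0
            Console.WriteLine(GC.GetGeneration(small));    // 0

            GC.Collect();                                  // survives a collection -> promoted
            Console.WriteLine(GC.GetGeneration(small));    // 1

            GC.Collect();
            Console.WriteLine(GC.GetGeneration(small));    // 2 - now the expensive place to die

            var large = new byte[100 * 1024];              // over 85 KB -> large object heap
            Console.WriteLine(GC.GetGeneration(large));    // reported as generation 2 straight away
        }
    }

Objects that get promoted like this and then die, or that are held in cache indefinitely, are exactly what drives the heap pressure and worker process recycling described above.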
How can you address this, well two ways of addressing this; 1. Workaround – A x86 bit processor only allows a maximum of 4GB of RAM, this means the machine effectively has around 3.4 GB of RAM available, the OS needs about 1.5 GB of RAM to run efficiently, the IIS and .net framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team builds by default build your application in ‘Compile as any mode’ it means the application is build such that it will run in x86 bit mode if run on a x86 bit processor and run in a x64 bit mode if run on a x64 but processor. The problem with this is not all applications are really x64 bit compatible specially if you are using com objects or external libraries. So, as a quick win if you compiled your application in x86 bit mode by changing the compile as any selection to compile as x86 in the team build, you will be able to run your application on a x64 bit machine in x86 bit mode (WOW – By running Windows on Windows) and what that means is, you could use 8GB+ worth of RAM, if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either build a software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place the IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question “How do I Identify possible memory leaks in my application?” Well i won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great start point, it definitely gets you started in the right direction, let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation, this lets you measure function call counts and timings. Before running the performance session right click the performance session settings and chose properties from the context menu to bring up the Performance session properties page and as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’ namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’.    Now if you fire off the profiling session on your pages you will notice that the results allows you to view ‘Object Lifetime’ which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, Large heap, etc. Another great feature about the profile is that if your application has > 5% cases where objects die right after making to the Gen-2 storage a threshold alert is generated to alert you. Since you have the option to also view the most expensive methods and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. Well now that we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product. Caching One of the main ways to improve performance is Caching. Which basically means you tell the web server that instead of going to the database for each request you keep the data in the webserver and when the user asks for it you serve it from the webserver itself. BUT that can have consequences! 
Let’s look at some code, trust me caching code is not very intuitive, I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine, first time i get the data from the database and second time data is served from the cache, significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding the lock as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user locks and starts to get the cache no more concurrency issues. But lets say you are processing 10 requests per second, by the time i have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null so after i have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before actual call to the db then the 9 users who follow me would not make the extra trip to the database at all and that would really increase the performance, but didn’t i say that the code won’t be very intuitive, may be you should leave a comment you don’t want another developer to come in and think what a fresher why is he checking for the cache key null twice !!! The downside of caching is, you are storing the data outside of the database and the data could be wrong because the updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well if you only had one way of updating the data lets say only one entry point to the data update you can write some logic to say that every time new data is entered set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is Micro Caching which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is every time the user queries for that data with in the time span for which you have cached the results there are no calls made to the database and the data is served right from the server which makes the response immensely quick. Now figuring out the appropriate time span for which you micro cache the query results really depends on the application. Lets say your website gets 10 requests per second, if you retain the cache results for even 1 minute you will have immense performance gains. You would reduce 90% hits to the database for searching. Ever wondered why when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting and you get an error message telling you that the fare is not valid any more. Yes, exactly => That is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button instead the results will be served from the cache, because the query results are micro cached, its a perfect trade-off, by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired. 
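To make that description concrete, here is a hedged sketch of the pattern being discussed – a lock, a second null check inside the lock, and a short absolute expiration for the micro cache. The class name, cache key handling and the one-minute window are assumptions, not code from the article.

    using System;
    using System.Data;
    using System.Web;
    using System.Web.Caching;

    public static class SearchCache
    {
        private static readonly object _cacheLock = new object();

        public static DataTable GetResults(string cacheKey, Func<DataTable> loadFromDatabase)
        {
            var cached = HttpRuntime.Cache[cacheKey] as DataTable;
            if (cached != null) return cached;              // served straight from the web server

            lock (_cacheLock)
            {
                // The second, "non-intuitive" null check: anyone who queued up behind
                // the first caller finds the cache already populated and never makes
                // the extra trip to the database.
                cached = HttpRuntime.Cache[cacheKey] as DataTable;
                if (cached != null) return cached;

                cached = loadFromDatabase();

                // Micro caching: keep the result only for a fixed, short window.
                HttpRuntime.Cache.Insert(cacheKey, cached, null,
                    DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);

                return cached;
            }
        }
    }

With a one-minute window a visitor can still occasionally act on a result that has just gone stale, which is exactly the expired-fare message described above.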
But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second and cater to the site traffic, at the cost of maybe losing one customer every once in a while to a competitor who is also using a similar caching technique, and what are the odds that the user will not come back to their site sooner or later? Recap   Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you. The Road Ahead Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up is 'Load Testing in the cloud', where I'll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank you!   Share this post : CodeProject

    Read the article

  • Dynamic XAP loading in Task-It - Part 1

    Download Source Code NOTE 1: The source code provided is running against the RC versions of Silverlight 4 and VisualStudio 2010, so you will need to update to those bits to run it. NOTE 2: After downloading the source, be sure to set the .Web project as the StartUp Project, and Default.aspx as the Start Page In my MEF into post, MEF to the rescue in Task-It, I outlined a couple of issues I was facing and explained why I chose MEF (the Managed Extensibility Framework) to solve these issues. Other posts to check out There are a few other resources out there around dynamic XAP loading that you may want to review (by the way, Glenn Block is the main dude when it comes to MEF): Glenn Blocks 3-part series on a dynamically loaded dashboard Glenn and John Papas Silverlight TV video on dynamic xap loading These provide some great info, but didnt exactly cover the scenario I wanted to achieve in Task-Itand that is dynamically loading each of the apps pages the first time the user enters a page. The code In the code I provided for download above, I created a simple solution that shows the technique I used for dynamic XAP loading in Task-It, but without all of the other code that surrounds it. Taking all that other stuff away should make it easier to grasp. Having said that, there is still a fair amount of code involved. I am always looking for ways to make things simpler, and to achieve the desired result with as little code as possible, so if I find a better/simpler way I will blog about it, but for now this technique works for me. When I created this solution I started by creating a new Silverlight Navigation Application called DynamicXAP Loading. I then added the following line to my UriMappings in MainPage.xaml: <uriMapper:UriMapping Uri="/{assemblyName};component/{path}" MappedUri="/{assemblyName};component/{path}"/> In the section of MainPage.xaml that produces the page links in the upper right, I kept the Home link, but added a couple of new ones (page1 and page 2). These are the pages that will be dynamically (lazy) loaded: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">      <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>      <Rectangle Style="{StaticResource DividerStyle}"/>      <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>      <Rectangle Style="{StaticResource DividerStyle}"/>      <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>  </StackPanel> In App.xaml.cs I added a bit of MEF code. In Application_Startup I call a method called InitializeContainer, which creates a PackageCatalog (a MEF thing), then I create a CompositionContainer and pass it to the CompositionHost.Initialize method. This is boiler-plate MEF stuff that allows you to do 'composition' and import 'packages'. You're welcome to do a bit more MEF research on what is happening here if you'd like, but for the purpose of this example you can just trust that it works. 
:-) private void Application_Startup(object sender, StartupEventArgs e) {     InitializeContainer();     this.RootVisual = new MainPage(); }   private static void InitializeContainer() {     var catalog = new PackageCatalog();     catalog.AddPackage(Package.Current);     var container = new CompositionContainer(catalog);     container.ComposeExportedValue(catalog);     CompositionHost.Initialize(container); } Infrastructure In the sample code you'll notice that there is a project in the solution called DynamicXAPLoading.Infrastructure. This is simply a Silverlight Class Library project that I created just to move stuff I considered application 'infrastructure' code into a separate place, rather than cluttering the main Silverlight project (DynamicXapLoading). I did this same thing in Task-It, as the amount of this type of code was starting to clutter up the Silverlight project, and it just seemed to make sense to move things like Enums, Constants and the like off to a separate place. In the DynamicXapLoading.Infrastructure project you'll see 3 classes: Enums - There is only one enum in here called ModuleEnum. We'll use these later. PageMetadata - We will use this class later to add metadata to a new dynamically loaded project. ViewModelBase - This is simply a base class for view models that we will use in this, as well as future samples. As mentioned in my MVVM post, I will be using the MVVM pattern throughout my code for reasons detailed in the post. By the way, the ViewModelExtension class in there allows me to do strongly-typed property changed notification, so rather than OnPropertyChanged("MyProperty"), I can do this.OnPropertyChanged(p => p.MyProperty). It's just a less error-prown approach, because if you don't spell "MyProperty" correctly using the first method, nothing will break, it just won't work. Adding a new page We currently have a couple of pages that are being dynamically (lazy) loaded, but now let's add a third page. 1. First, create a new Silverlight Application project: In this example I call it Page3. In the future you may prefer to use a different name, like DynamicXAPLoading.Page3, or even DynamicXAPLoading.Modules.Page3. It can be whatever you want. In my Task-It application I used the latter approach (with 'Modules' in the name). I do think of these application as 'modules', but Prism uses the same term, so some folks may not like that. Use whichever naming convention you feel is appropriate, but for now Page3 will do. When you change the name to Page3 and click OK, you will be presented with the Add New Project dialog: It is important that you leave the 'Host the Silverlight application in a new or existing Web site in the solution' checked, and the .Web project will be selected in the dropdown below. This will create the .xap file for this project under ClientBin in the .Web project, which is where we want it. 2. Uncheck the 'Add a test page that references the application' checkbox, and leave everything else as is. 3. Once the project is created, you can delete App.xaml and MainPage.xaml. 4. You will need to add references your new project to the following: DynamicXAPLoading.Infrastructure.dll (this is a Project reference) DynamicNavigation.dll (this is in the Libs directory under the DynamicXAPLoading project) System.ComponentModel.Composition.dll System.ComponentModel.Composition.Initialization.dll System.Windows.Controls.Navigation.dll If you have installed the latest RC bits you will find the last 3 dll's under the .NET tab in the Add Referenced dialog. 
They live in the following location, or if you are on a 64-bit machine like me, it will be Program Files (x86).       C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client Now let's create some UI for our new project. 5. First, create a new Silverlight User Control called Page3.dyn.xaml 6. Paste the following code into the xaml: <dyn:DynamicPageShim xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:my="clr-namespace:Page3;assembly=Page3">     <my:Page3Host /> </dyn:DynamicPageShim> This is just a 'shim', part of David Poll's technique for dynamic loading. 7. Expand the icon next to Page3.dyn.xaml and delete the code-behind file (Page3.dyn.xaml.cs). 8. Next we will create a control that will 'host' our page. Create another Silverlight User Control called Page3Host.xaml and paste in the following XAML: <dyn:DynamicPage x:Class="Page3.Page3Host"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:Views="clr-namespace:Page3.Views"      mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Title="Page 3">       <Views:Page3/>   </dyn:DynamicPage> 9. Now paste the following code into the code-behind for this control: using DynamicXAPLoading.Infrastructure;   namespace Page3 {     [PageMetadata(NavigateUri = "/Page3;component/Page3.dyn.xaml", Module = Enums.Page3)]     public partial class Page3Host     {         public Page3Host()         {             InitializeComponent();         }     } } Notice that we are now using that PageMetadata custom attribute class that we created in the Infrastructure project, and setting its two properties. NavigateUri - This tells it that the assembly is called Page3 (with a slash beforehand), and the page we want to load is Page3.dyn.xaml...our 'shim'. That line we added to the UriMapper in MainPage.xaml will use this information to load the page. Module - This goes back to that ModuleEnum class in our Infrastructure project. However, setting the Module to ModuleEnum.Page3 will cause a compilation error, so... 10. Go back to that Enums.cs under the Infrastructure project and add a 3rd entry for Page3: public enum ModuleEnum {     Page1,     Page2,     Page3 } 11. Now right-click on the Page3 project and add a folder called Views. 12. Right-click on the Views folder and create a new Silverlight User Control called Page3.xaml. We won't bother creating a view model for this User Control as I did in the Page 1 and Page 2 projects, just for the sake of simplicity. Feel free to add one if you'd like though, and copy the code from one of those other projects. Right now those view models aren't really doing anything anyway...though they will in my next post. :-) 13. 
Now let's replace the xaml for Page3.xaml with the following: <dyn:DynamicPage x:Class="Page3.Views.Page3"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Style="{StaticResource PageStyle}">       <Grid x:Name="LayoutRoot">         <ScrollViewer x:Name="PageScrollViewer" Style="{StaticResource PageScrollViewerStyle}">             <StackPanel x:Name="ContentStackPanel">                 <TextBlock x:Name="HeaderText" Style="{StaticResource HeaderTextStyle}" Text="Page 3"/>                 <TextBlock x:Name="ContentText" Style="{StaticResource ContentTextStyle}" Text="Page 3 content"/>             </StackPanel>         </ScrollViewer>     </Grid>   </dyn:DynamicPage> 14. And in the code-behind remove the inheritance from UserControl, so it should look like this: namespace Page3.Views {     public partial class Page3     {         public Page3()         {             InitializeComponent();         }     } } One thing you may have noticed is that the base class for the last two User Controls we created is DynamicPage. Once again, we are using the infrastructure that David Poll created. 15. OK, a few last things. We need a link on our main page so that we can access our new page. In MainPage.xaml let's update our links to look like this: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">     <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 3" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage3}"/> </StackPanel> 16. Next, we need to add the following at the bottom of MainPageViewModel in the ViewModels directory of our DynamicXAPLoading project: public ModuleEnum ModulePage3 {     get { return ModuleEnum.Page3; } } 17. And at last, we need to add a case for our new page to the switch statement in MainPageViewModel: switch (module) {     case ModuleEnum.Page1:         DownloadPackage("Page1.xap");         break;     case ModuleEnum.Page2:         DownloadPackage("Page2.xap");         break;     case ModuleEnum.Page3:         DownloadPackage("Page3.xap");         break;     default:         break; } Now fire up the application and click the page 1, page 2 and page 3 links. What you'll notice is that there is a 2-second delay the first time you hit each page. That is because I added the following line to the Navigate method in MainPageViewModel: Thread.Sleep(2000); // Simulate a 2 second initial loading delay The reason I put this in there is that I wanted to simulate a delay the first time the page loads (as the .xap is being downloaded from the server). 
You'll notice that after the first hit to the page, though, there is no delay...that's because the .xap has already been downloaded. Feel free to comment out this 2-second delay, or remove it if you'd like; I just wanted to show how subsequent hits to the page are quicker than the initial one. By the way, you may want to display some sort of BusyIndicator while the .xap is loading. I have that in my Task-It application, but for the sake of simplicity I did not include it here. In the future I'll blog about how I show and hide the BusyIndicator using events (I'm currently using the eventing framework in Prism for that, but may move to the one in the MVVM Light Toolkit some time soon). Whew, that felt like a lot of steps, but it does work quite nicely. As I mentioned earlier, I'll try to find ways to simplify the code (I'd like to get away from things like hard-coded .xap file names) and will blog about it in the future if I find a better way. In my next post, I'll talk more about what is actually happening with the code that makes this all work. Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories, including Silverlight. Take a look: here.
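One piece the walkthrough references but does not list is the DownloadPackage helper called from the switch statement above. As a rough sketch only, written against the released Silverlight 4 MEF API (DeploymentCatalog) rather than the RC-era Package/PackageCatalog the sample is built on, it could look something like this; the _catalog field and the NavigateToModule call are assumptions, not the author's code.

// Sketch only: assumes the released MEF bits, where DeploymentCatalog replaces the
// RC-era PackageCatalog/Package pair used elsewhere in this sample.
private readonly AggregateCatalog _catalog = new AggregateCatalog();

private void DownloadPackage(string xapName)
{
    var deploymentCatalog = new DeploymentCatalog(xapName);
    deploymentCatalog.DownloadCompleted += (s, e) =>
    {
        if (e.Error == null)
        {
            // Adding the downloaded catalog makes the new XAP's exports
            // available to the container, so navigation can proceed.
            _catalog.Catalogs.Add(deploymentCatalog);
            NavigateToModule(xapName);   // hypothetical follow-up navigation call
        }
    };
    deploymentCatalog.DownloadAsync();
}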

    Read the article

  • MVVM in Task-It

    As I'm gearing up to write a post about dynamic XAP loading with MEF, I'd like to first talk a bit about MVVM, the Model-View-ViewModel pattern, as I will be leveraging this pattern in my future posts. Download Source Code Why MVVM? Your first question may be, "why do I need this pattern? I've been using a code-behind approach for years and it works fine." Well, you really don't have to make the switch to MVVM, but let me first explain some of the benefits I see for doing so. MVVM Benefits Testability - This is the one you'll probably hear the most about when it comes to MVVM. Moving most of the code from your code-behind to a separate view model class means you can now write unit tests against the view model without any knowledge of a view (UserControl). Multiple UIs - Let's just say that you've created a killer app, it's running in the browser, and maybe you've even made it run out-of-browser. Now what if your boss comes to you and says, "I heard about this new Windows Phone 7 device that is coming out later this year. Can you start porting the app to that device?". Well, now you have to create a new UI (UserControls, etc.) because you have a lot less screen real estate to work with. So what do you do, copy all of your existing UserControls, paste them, rename them, and then start changing the code? Hmm, that doesn't sound so good. But wait, if most of the code that makes your browser-based app tick lives in view model classes, now you can create new view (UserControls) for Windows Phone 7 that reference the same view model classes as your browser-based app. Page state - In Silverlight you're at some point going to be faced with the same issue you dealt with for years in ASP.NET, maintaining page state. Let's say a user hits your Products page, does some stuff (filters record, etc.), then leaves the page and comes back later. It would be best if the Products page was in the same state as when they left it right? Well, if you've thrown away your view (UserControl or Page) and moved off to another part of the UI, when you come back to Products you're probably going to re-instantiate your view...which will put it right back in the state it was when it started. Hmm, not good. Well, with a little help from MEF you can store the state in your view model class, MEF will keep that view model instance hanging around in memory, and then you simply rebind your view to the view model class. I made that sound easy, but it's actually a bit of work to properly store and restore the state. At least it can be done though, which will make your users a lot happier! I'll talk more about this in an upcoming blog post. No event handlers? Another nice thing about MVVM is that you can bind your UserControls to the view model, which may eliminate the need for event handlers in your code-behind. So instead of having a Click handler on a Button (or RadMenuItem), for example, you can now bind your control's Command property to a DelegateCommand in your view model (I'll talk more about Commands in an upcoming post). Instead of having a SelectionChanged event handler on your RadGridView you can now bind its SelectedItem property to a property in your view model, and each time the user clicks a row, the view model property's setter will be called. 
Now through the magic of binding we can eliminate the need for traditional code-behind based event handlers on our user interface controls, and the best thing is that the view model knows about everything that's going on...which means we can test things without a user interface. The brains of the operation So what we're seeing here is that the view is now just a dumb layer that binds to the view model, and that the view model is in control of just about everything, like what happens when a RadGridView row is selected, or when a RadComboBoxItem is selected, or when a RadMenuItem is clicked. It is also responsible for loading data when the page is hit, as well as kicking off data inserts, updates and deletions. Once again, all of this stuff can be tested without the need for a user interface. If the test works, then it'll work regardless of whether the user is hitting the browser-based version of your app, or the Windows Phone 7 version. Nice! The database Before running the code for this app you will need to create the database. First, create a database called MVVMProject in SQL Server, then run MVVMProject.sql in the MVVMProject/Database directory of your downloaded .zip file. This should give you a Task table with 3 records in it. When you fire up the solution you will also need to update the connection string in web.config to point to your database instead of IBM12\SQLSERVER2008. The code One note about this code is that it runs against the latest Silverlight 4 RC and WCF RIA Services code. Please see my first blog post about updating to the RC bits. Beta to RC - Part 1 At the top of this post is a link to a sample project that demonstrates a sample application with a Tasks page that uses the MVVM pattern. This is a simplified version of how I have implemented the Tasks page in the Task-It application. Youll notice that Tasks.xaml has very little code to it. Just a TextBlock that displays the page title and a ContentControl. <StackPanel>     <TextBlock Text="Tasks" Style="{StaticResource PageTitleStyle}"/>     <Rectangle Style="{StaticResource StandardSpacerStyle}"/>     <ContentControl x:Name="ContentControl1"/> </StackPanel> In List.xaml we have a RadGridView. Notice that the ItemsSource is bound to a property in the view model class call Tasks, SelectedItem is bound to a property in the view model called SelectedItem, and IsBusy is bound to a property in the view model called IsLoading. <Grid>     <telerikGridView:RadGridView ItemsSource="{Binding Tasks}" SelectedItem="{Binding SelectedItem, Mode=TwoWay}"                                  IsBusy="{Binding IsLoading}" AutoGenerateColumns="False" IsReadOnly="True" RowIndicatorVisibility="Collapsed"                IsFilteringAllowed="False" ShowGroupPanel="False">         <telerikGridView:RadGridView.Columns>             <telerikGridView:GridViewDataColumn Header="Name" DataMemberBinding="{Binding Name}" Width="3*"/>             <telerikGridView:GridViewDataColumn Header="Due" DataMemberBinding="{Binding DueDate}" DataFormatString="{}{0:d}" Width="*"/>         </telerikGridView:RadGridView.Columns>     </telerikGridView:RadGridView> </Grid> In Details.xaml we have a Save button that is bound to a property called SaveCommand in our view model. We also have a simple form (Im using a couple of controls here from Silverlight.FX for the form layout, FormPanel and Label simply because they make for a clean XAML layout). Notice that the FormPanel is also bound to the SelectedItem in the view model (the same one that the RadGridView is). 
The two form controls, the TextBox and RadDatePicker) are bound to the SelectedItem's Name and DueDate properties. These are properties of the Task object that WCF RIA Services creates. <StackPanel>     <Button Content="Save" Command="{Binding SaveCommand}" HorizontalAlignment="Left"/>     <Rectangle Style="{StaticResource StandardSpacerStyle}"/>     <fxui:FormPanel DataContext="{Binding SelectedItem}" Style="{StaticResource FormContainerStyle}">         <fxui:Label Text="Name:"/>         <TextBox Text="{Binding Name, Mode=TwoWay}"/>         <fxui:Label Text="Due:"/>         <telerikInput:RadDatePicker SelectedDate="{Binding DueDate, Mode=TwoWay}"/>     </fxui:FormPanel> </StackPanel> In the code-behind of the Tasks control, Tasks.xaml.cs, I created an instance of the view model class (TasksViewModel) in the constructor and set it as the DataContext for the control. The Tasks page will load one of two child UserControls depending on whether you are viewing the list of tasks (List.xaml) or the form for editing a task (Details.xaml). // Set the DataContext to an instance of the view model class var viewModel = new TasksViewModel(); DataContext = viewModel;   // Child user controls (inherit DataContext from this user control) List = new List(); // RadGridView Details = new Details(); // Form When the page first loads, the List is loaded into the ContentControl. // Show the RadGridView first ContentControl1.Content = List; In the code-behind we also listen for a couple of the view models events. The ItemSelected event will be fired when the user clicks on a record in the RadGridView in the List control. The SaveCompleted event will be fired when the user clicks Save in the Details control (the form). Here the view model is in control, and is letting the view know when something needs to change. // Listeners for the view model's events viewModel.ItemSelected += OnItemSelected; viewModel.SaveCompleted += OnSaveCompleted; The event handlers toggle the view between the RadGridView (List) and the form (Details). void OnItemSelected(object sender, RoutedEventArgs e) {     // Show the form     ContentControl1.Content = Details; }   void OnSaveCompleted(object sender, RoutedEventArgs e) {     // Show the RadGridView     ContentControl1.Content = List; } In TasksViewModel, we instantiate a DataContext object and a SaveCommand in the constructor. DataContext is a WCF RIA Services object that well use to retrieve the list of Tasks and to save any changes to a task. Ill talk more about this and Commands in future post, but for now think of the SaveCommand as an event handler that is called when the Save button in the form is clicked. DataContext = new DataContext(); SaveCommand = new DelegateCommand(OnSave); When the TasksViewModel constructor is called we also make a call to LoadTasks. This sets IsLoading to true (which causes the RadGridViews busy indicator to appear) and retrieves the records via WCF RIA Services.         public LoadOperation<Task> LoadTasks()         {             // Show the loading message             IsLoading = true;             // Get the data via WCF RIA Services. When the call has returned, called OnTasksLoaded.             return DataContext.Load(DataContext.GetTasksQuery(), OnTasksLoaded, false);         } When the data is returned, OnTasksLoaded is called. This sets IsLoading to false (which hides the RadGridViews busy indicator), and fires property changed notifications to the UI to let it know that the IsLoading and Tasks properties have changed. 
This property changed notification basically tells the UI to rebind. void OnTasksLoaded(LoadOperation<Task> lo) {     // Hide the loading message     IsLoading = false;       // Notify the UI that Tasks and IsLoading properties have changed     this.OnPropertyChanged(p => p.Tasks);     this.OnPropertyChanged(p => p.IsLoading); } Next lets look at the view models SelectedItem property. This is the one thats bound to both the RadGridView and the form. When the user clicks a record in the RadGridView its setter gets called (set a breakpoint and see what I mean). The other code in the setter lets the UI know that the SelectedItem has changed (so the form displays the correct data), and fires the event that notifies the UI that a selection has occurred (which tells the UI to switch from List to Details). public Task SelectedItem {     get { return _selectedItem; }     set     {         _selectedItem = value;           // Let the UI know that the SelectedItem has changed (forces it to re-bind)         this.OnPropertyChanged(p => p.SelectedItem);         // Notify the UI, so it can switch to the Details (form) page         NotifyItemSelected();     } } One last thing, saving the data. When the Save button in the form is clicked it fires the SaveCommand, which calls the OnSave method in the view model (once again, set a breakpoint to see it in action). public void OnSave() {     // Save the changes via WCF RIA Services. When the save is complete, call OnSaveCompleted.     DataContext.SubmitChanges(OnSaveCompleted, null); } In OnSave, we tell WCF RIA Services to submit any changes, which there will be if you changed either the Name or the Due Date in the form. When the save is completed, it calls OnSaveCompleted. This method fires a notification back to the UI that the save is completed, which causes the RadGridView (List) to show again. public virtual void OnSaveCompleted(SubmitOperation so) {     // Clear the item that is selected in the grid (in case we want to select it again)     SelectedItem = null;     // Notify the UI, so it can switch back to the List (RadGridView) page     NotifySaveCompleted(); } Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
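A quick footnote on SaveCommand: the post relies on a DelegateCommand type without listing it (in Task-It it comes from a library such as Prism). A minimal hand-rolled equivalent, shown here only to illustrate how a Command binding reaches the view model, looks roughly like this:

using System;
using System.Windows.Input;

// Minimal stand-in for the DelegateCommand used above; not the library class itself.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute, Func<bool> canExecute = null)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute();
    }

    // Invoked by the binding when the bound control (e.g. the Save button) is clicked.
    public void Execute(object parameter)
    {
        _execute();
    }
}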

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T>.  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collections we will examine are the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible new items could be consumed before older items, given the right circumstances, for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus the bag is sometimes great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting. The way the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding to or removing from a thread-local list requires very little synchronization.  The problem comes in when a thread goes to consume an item but its local list is empty.  In this case the bag performs "work-stealing", where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization, which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items. 
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock collection Each of these needs to take a snapshot of whole bag to determine if empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because it is based on a snapshot, will not show subsequent updates after snapshot. Add() is lightweight Since adding to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items in thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(), however if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations, it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bare with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag, this Tuple will contain two ints.  The first int is the original number, and the second int is the last multiple of that number.  So we could load our bag with the initial values (let’s say we want to know the last multiple of each of 2, 3, 5, and 7 under 100. 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that given the bag, will take out an item, apply the multiplier again, 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around all of the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily including waiting when the bin is full or empty. That’s the end of my dive into the concurrent collections.  I’d also strongly recommend, once again, you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here). Tweet Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders
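One method from the list above that the samples do not exercise is GetConsumingEnumerable(). As a small self-contained sketch (the bin size and item count are arbitrary), it lets the consumer collapse the manual IsCompleted/TryTake() loop into a simple foreach:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ConsumingEnumerableDemo
{
    public static void Main()
    {
        var bin = new BlockingCollection<int>(boundedCapacity: 5);

        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 20; i++)
            {
                bin.Add(i);            // blocks while the bin is full
            }
            bin.CompleteAdding();      // lets the consuming enumerable complete
        });

        var consumer = Task.Factory.StartNew(() =>
        {
            // Blocks waiting for items, and ends cleanly once adding is
            // completed and the bin is drained; no explicit IsCompleted check needed.
            foreach (var item in bin.GetConsumingEnumerable())
            {
                Console.WriteLine("Consuming item {0}.", item);
            }
        });

        Task.WaitAll(producer, consumer);
    }
}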

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schema provided by with the Oracle database such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit, one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record, why do we see JBO-25014? : Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0 if memory serves me correct) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting –Djbo.debugoutput=console.  At runtime this options turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm. 
Setting our locking mode to pessimistic, opening the Business Components Browser of a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record see among others the following log entries: [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[423] Where binding param 1: 1206  As can be seen on line 422, in fact a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see: [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[445] Update binding param 1: N[446] Where binding param 2: 1206[447] BookingsView1 notify COMMIT ... [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [449] EntityCache close prepared statement ....and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs: [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[513] Where binding param 1: 1205[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[517] Update binding param 1: Y[518] Where binding param 2: 1205[519] BookingsView1 notify COMMIT ... [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [521] EntityCache close prepared statement Again even though we're seeing the midtier delay the LOCK statement until commit time, it is in fact occurring on line 412, and released as part of the commit issued on line 419.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be, unless there's the unlikely cause the LOCK statement is never really hitting the database, or the even less likely cause the database has a bug, then ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it. 
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario. Instead of trigger views I must admit I haven't double checked this scenario but it seems plausible, that of the Oracle database's instead of trigger view (sometimes referred to as instead of views).  A view in the database is based on a query, and dependent on the queries complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, that gives the view designer the ability to not only define the view query, but a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and it's data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of 1 developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record. At this point under project timelines pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default value that the existing Oracle Forms  applications assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and timely exercise, not ideal. Solving the mystery once and for all Luckily ADF has built in functionality to deal with this issue, though it's not a surprise, as Oracle as the author of ADF also built the database, and are fully aware of the Oracle database's feature set.  At the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.  
Simply selecting these instructs ADF BC after inserting or updating a record to the database, to expect the database to modify the said attributes, and read a copy of the changed attributes back into its cached mid-tier record.  Thus next time the developer modifies the current record, the comparison between the mid-tier record and the database record match, and JBO-25014: Another user has changed" is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out the above solution will not work for instead-of-triggers views as it relies on SQL RETURNING clause which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back on the original issue will return.

    Read the article

  • How to Run Low-Cost Minecraft on a Raspberry Pi for Block Building on the Cheap

    - by Jason Fitzpatrick
We’ve shown you how to run your own blocktastic personal Minecraft server on a Windows/OSX box, but what if you crave something lighter weight, more energy efficient, and always ready for your friends? Read on as we turn a tiny Raspberry Pi machine into a low-cost Minecraft server you can leave on 24/7 for around a penny a day.

Why Do I Want to Do This? There are two aspects to this tutorial: running your own Minecraft server, and specifically running that Minecraft server on a Raspberry Pi. Why would you want to run your own Minecraft server? It’s a really great way to extend and build upon the Minecraft play experience. You can leave the server running when you’re not playing so friends and family can join and continue building your world. You can mess around with game variables and introduce mods in a way that isn’t possible when you’re playing the stand-alone game. It also gives you the kind of control over your multiplayer experience that using public servers doesn’t, without incurring the cost of hosting a private server on a remote host. While running a Minecraft server on its own is appealing enough to a dedicated Minecraft fan, running it on the Raspberry Pi is even more appealing. The tiny little Pi uses so few resources that you can leave your Minecraft server running 24/7 for a couple bucks a year. Aside from the initial cost outlay of the Pi, an SD card, and a little bit of time setting it up, you’ll have an always-on Minecraft server at a monthly cost of around one gumball.

What Do I Need? For this tutorial you’ll need a mix of hardware and software tools; aside from the actual Raspberry Pi and SD card, everything is free.

1 Raspberry Pi (preferably a 512MB model)
1 4GB+ SD card

This tutorial assumes that you have already familiarized yourself with the Raspberry Pi and have installed a copy of the Debian-derivative Raspbian on the device. If you have not got your Pi up and running yet, don’t worry! Check out our guide, The HTG Guide to Getting Started with Raspberry Pi, to get up to speed.

Optimizing Raspbian for the Minecraft Server Unlike other builds we’ve shared where you can layer multiple projects over one another (e.g. the Pi is more than powerful enough to serve as a weather/email indicator and a Google Cloud Print server at the same time), running a Minecraft server is a pretty intense operation for the little Pi and we’d strongly recommend dedicating the entire Pi to the process. Minecraft seems like a simple game, with all its blocky-ness and what not, but it’s actually a pretty complex game beneath the simple skin and requires a lot of processing power. As such, we’re going to tweak the configuration file and other settings to optimize Raspbian for the job. The first thing you’ll need to do is dig into the Raspi-Config application to make a few minor changes. If you’re installing Raspbian fresh, wait for the last step (which is Raspi-Config); if you already installed it, head to the terminal and type in “sudo raspi-config” to launch it again. One of the first and most important things we need to attend to is cranking up the overclock setting. We need all the power we can get to make our Minecraft experience enjoyable. In Raspi-Config, select option number 7 “Overclock”. Be prepared for some stern warnings about overclocking, but rest easy knowing that overclocking is directly supported by the Raspberry Pi foundation and has been included in the configuration options since late 2012. Once you’re in the actual selection screen, select “Turbo 1000MHz”.
Again, you’ll be warned that the degree of overclocking you’ve selected carries risks (specifically, potential corruption of the SD card, but no risk of actual hardware damage). Click OK and wait for the device to reset. Next, make sure you’re set to boot to the command prompt, not the desktop. Select number 3 “Enable Boot to Desktop/Scratch” and make sure “Console Text console” is selected. Back at the Raspi-Config menu, select number 8 “Advanced Options”. There are two critical changes we need to make in here and one optional change. First, the critical changes. Select A3 “Memory Split”: change the amount of memory available to the GPU to 16MB (down from the default 64MB). Our Minecraft server is going to run in a GUI-less environment; there’s no reason to allocate any more than the bare minimum to the GPU. After selecting the GPU memory, you’ll be returned to the main menu. Select “Advanced Options” again and then select A4 “SSH”. Within the sub-menu, enable SSH. There is very little reason to keep this Pi connected to a monitor and keyboard; by enabling SSH we can remotely access the machine from anywhere on the network. Finally (and optionally) return again to the “Advanced Options” menu and select A2 “Hostname”. Here you can change your hostname from “raspberrypi” to a more fitting Minecraft name. We opted for the highly creative hostname “minecraft”, but feel free to spice it up a bit with whatever you feel like: creepertown, minecraft4life, or miner-box are all great minecraft server names.

That’s it for the Raspbian configuration. Tab down to the bottom of the main screen and select “Finish” to reboot. After rebooting you can now SSH into your Pi, or continue working from the keyboard hooked up to it (we strongly recommend switching over to SSH as it allows you to easily cut and paste the commands). If you’ve never used SSH before, check out how to use PuTTY with your Pi here.

Installing Java on the Pi The Minecraft server runs on Java, so the first thing we need to do on our freshly configured Pi is install it. Log into your Pi via SSH and then, at the command prompt, enter the following command to make a directory for the installation:

sudo mkdir /java/

Now we need to download the newest version of Java. At the time of this publication the newest release is the OCT 2013 update and the link/filename we use will reflect that. Please check for a more current version of the Linux ARMv6/7 Java release on the Java download page and update the link/filename accordingly when following our instructions. At the command prompt, enter the following command:

sudo wget --no-check-certificate http://www.java.net/download/jdk8/archive/b111/binaries/jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz

Once the download has finished successfully, enter the following command:

sudo tar zxvf jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz -C /opt/

Fun fact: the /opt/ directory name scheme is a remnant of early Unix design wherein the /opt/ directory was for “optional” software installed after the main operating system; it was the /Program Files/ of the Unix world.
After the file has finished extracting, enter:

sudo /opt/jdk1.8.0/bin/java -version

This command will return the version number of your new Java installation like so:

java version "1.8.0-ea"
Java(TM) SE Runtime Environment (build 1.8.0-ea-b111)
Java HotSpot(TM) Client VM (build 25.0-b53, mixed mode)

If you don’t see the above printout (or a variation thereof if you’re using a newer version of Java), try to extract the archive again. If you do see the readout, enter the following command to tidy up after yourself:

sudo rm jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz

At this point Java is installed and we’re ready to move on to installing our Minecraft server!

Installing and Configuring the Minecraft Server Now that we have a foundation for our Minecraft server, it’s time to install the part that matters. We’ll be using SpigotMC, a lightweight and stable Minecraft server build that works wonderfully on the Pi. First, grab a copy of the code with the following command:

sudo wget http://ci.md-5.net/job/Spigot/lastSuccessfulBuild/artifact/Spigot-Server/target/spigot.jar

This link should remain stable over time, as it points directly to the most current stable release of Spigot, but if you have any issues you can always reference the SpigotMC download page here. After the download finishes successfully, enter the following command:

sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui

Note: if you’re running the command on a 256MB Pi, change the 256 and 496 in the above command to 128 and 256, respectively. Your server will launch and a flurry of on-screen activity will follow. Be prepared to wait around 3-6 minutes or so for the process of setting up the server and generating the map to finish. Future startups will take much less time, around 20-30 seconds. Note: If at any point during the configuration or play process things get really weird (e.g. your new Minecraft server freaks out and starts spawning you in the Nether and killing you instantly), use the “stop” command at the command prompt to gracefully shut down the server so you can restart and troubleshoot it.

After the process has finished, head over to the computer you normally play Minecraft on, fire it up, and click on Multiplayer. You should see your server: If your world doesn’t pop up immediately during the network scan, hit the Add button and manually enter the address of your Pi. Once you connect to the server, you’ll see the status change in the server status window: According to the server, we’re in game. According to the actual Minecraft app, we’re also in game but it’s the middle of the night in survival mode: Boo! Spawning in the dead of night, weaponless and without shelter is no way to start things. No worries though, we need to do some more configuration; no time to sit around and get shot at by skeletons. Besides, if you try and play it without some configuration tweaks first, you’ll likely find it quite unstable. We’re just here to confirm the server is up, running, and accepting incoming connections. Once we’ve confirmed the server is running and connectable (albeit not very playable yet), it’s time to shut down the server. Via the server console, enter the command “stop” to shut everything down.
When you’re returned to the command prompt, enter the following command:

sudo nano server.properties

When the configuration file opens up, make the following changes (or just cut and paste our config file minus the first two lines with the name and date stamp):

#Minecraft server properties
#Thu Oct 17 22:53:51 UTC 2013
generator-settings=
#Default is true, toggle to false
allow-nether=false
level-name=world
enable-query=false
allow-flight=false
server-port=25565
level-type=DEFAULT
enable-rcon=false
force-gamemode=false
level-seed=
server-ip=
max-build-height=256
spawn-npcs=true
white-list=false
spawn-animals=true
texture-pack=
snooper-enabled=true
hardcore=false
online-mode=true
pvp=true
difficulty=1
player-idle-timeout=0
gamemode=0
#Default 20; you only need to lower this if you're running
#a public server and worried about loads.
max-players=20
spawn-monsters=true
#Default is 10, 3-5 ideal for Pi
view-distance=5
generate-structures=true
spawn-protection=16
motd=A Minecraft Server

In the server status window, seen through your SSH connection to the Pi, enter the following command to give yourself operator status on your Minecraft server (so that you can use more powerful commands in game, without always returning to the server status window):

op [your minecraft nickname]

At this point things are looking better but we still have a little tweaking to do before the server is really enjoyable. To that end, let’s install some plugins. The first plugin, and the one you should install above all others, is NoSpawnChunks. To install the plugin, first visit the NoSpawnChunks webpage and grab the download link for the most current version. As of this writing the current release is v0.3. Back at the command prompt (the command prompt of your Pi, not the server console – if your server is still active, shut it down) enter the following commands:

cd /home/pi/plugins
sudo wget http://dev.bukkit.org/media/files/586/974/NoSpawnChunks.jar

Next, visit the ClearLag plugin page, and grab the latest link (as of this tutorial, it’s v2.6.0). Enter the following at the command prompt:

sudo wget http://dev.bukkit.org/media/files/743/213/Clearlag.jar

Because the files aren’t compressed in a .ZIP or similar container, that’s all there is to it: the plugins are parked in the plugin directory. (Remember this for future plugin downloads: the file needs to be whateverplugin.jar, so if it’s compressed you need to uncompress it in the plugin directory.) Restart the server:

sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui

Be prepared for a slightly longer startup time (closer to the 3-6 minutes and much longer than the 30 seconds you just experienced) as the plugins affect the world map and need a minute to massage everything. After the spawn process finishes, type the following at the server console:

plugins

This lists all the plugins currently active on the server. You should see something like this: If the plugins aren’t loaded, you may need to stop and restart the server. After confirming your plugins are loaded, go ahead and join the game. You should notice significantly snappier play. In addition, you’ll get occasional messages from the plugins indicating they are active, as seen below: At this point Java is installed, the server is installed, and we’ve tweaked our settings for the Pi. It’s time to start building with friends!

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

Aspects of Functionality There are two sides to Functionality requirements. It’s about what the software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I’m not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale as the number of operations increases over time. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces, Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

Aspects of Quality As important as Functionality is, it’s not the driver for software development. No software has ever been written just to implement some operation in code. We don’t need computers just to do something. Everything computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don’t want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. Those are the Qualities the software is truly written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I’d say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That’s why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It’s a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different; there, security might be a Primary Quality. Please don’t get me wrong: I don’t want to denigrate any Quality. There’s a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

Aspects of Security of Investment Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it’s bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don’t just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency it will backfire. Customers want PE not just today, but over the whole course of a software’s lifecycle. That means it’s not just about coding speed, but equally about code quality. If poor code quality leads to rework, PE is at an unsatisfactory level. Likewise, if code production leads to waste, it’s unsatisfactory, because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that’s increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it’s going to get stuck.
Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It’s code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That’s why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That’s why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

In closing As you see, requirements are of quite different kinds. Not taking that into account will make it harder to understand the customer and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply that anything should be different in a code base. But it’s important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure we balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We’re dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don’t put them in people’s legs just because you’re feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.
In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That’s all nice and well. I don’t want to miss any of that. But I think it’s not enough. We’re missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or flow charts. But then, isn’t it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don’t think so. It’s a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is: we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That’s like with flying an airplane. Pilots don’t just jump in and take off for their destination. Yes, there are times when they are “flying by the seat of their pants”, when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See “The Checklist Manifesto” for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

[1] But that’s what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it’s going to stop. Probably never - until “maintainability” hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all of those there is a foreseeable end. Software development is like continuously and forever running…

[2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there already being functionality that helps its implementation increases. That should lead to lower costs for feature X if it’s requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

[3] Please don’t get me wrong: I don’t want to bog down the “art” of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • CodePlex Daily Summary for Tuesday, June 21, 2011

    CodePlex Daily Summary for Tuesday, June 21, 2011Popular ReleasesESRI ArcGIS Silverlight Toolkit: June 2011 - v2.2: ESRI ArcGIS Silverlight Toolkit v2.2 New controls added: Attribution Control ScaleLine Control GpsLayer (WinPhone only)Terraria World Viewer: Version 1.4: Update June 21st World file will be stored in memory to minimize chances of Terraria writing to it while we read it. Different set of APIs allow the program to draw the world much quicker. Loading world information (world variables, chest list) won't cause the GUI to freeze at all anymore. Re-introduced the "Filter chests" checkbox: Allow disabling of chest filter/finder so all chest symbos are always drawn. First-time users will have a default world path suggested to them: C:\Users\U...AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta7: ??AcDown???????????????,?????????????????????。????????????????????,??Acfun、Bilibili、???、???、?????,???????????、???????。 AcDown???????????????????????????,???,???????????????????。 AcDown???????C#??,?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ??v3.0 Beta7 ????????????? ???? ?? ????????????????? "??????"?????"?...BlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at ...Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Windows Azure Sample Description Owner CSAzureStartupTask The sample demonstrates using the startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure Rafe Wu ...Facebook C# SDK: 5.0.40: This is a RTW release which adds new features to v5.0.26 RTW. Support for multiple FacebookMediaObjects in one request. Allow FacebookMediaObjects in batch requests. Removes support for Cassini WebServer (visual studio inbuilt web server). Better support for unit testing and mocking. updated SimpleJson to v0.6 Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests Faceb...NLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notesPowerGUI Visual Studio Extension: PowerGUI VSX 1.3.5: Changes - VS SDK no longer required to be installed (a bug in v. 1.3.4).Gendering Add-In for Microsoft Office Word 2010: Gendering Add-In: This is the first stable Version of the Gendering Add-In. Unzip the package and start "setup.exe". 
The .reg file shows how to config an alternate path for suggestion table.Intelligent Enterprise Solution: Document for this project: Document for this projectTerrariViewer: TerrariViewer v3.1 [Terraria Inventory Editor]: This version adds tool tips. Almost every picture box you mouse over will tell you what item is in that box. I have also cleaned up the GUI a little more to make things easier on my end. There are various bug fixes including ones associated with opening different characters in the same instance of the program. As always, please bring any bugs you find to my attention.Kinect Paint: KinectPaint V1.0: This is version 1.0 of Kinect Paint. To run it, follow the steps: Install the Kinect SDK for Windows (available at http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/download.aspx) Connect your Kinect device to the computer and to the power. Download the Zip file. Unblock the Zip file by right clicking on it, and pressing the Unblock button in the file properties (if available). Extract the content of the Zip file. Run KinectPaint.exe.CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Beta: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.7Documentation 6738 6503 New 6535 Enhancements 6583 6737DropBox Linker: DropBox Linker 1.2: Public sub-folders are now monitored for changes as well (thanks to mcm69) Automatic public sync folder detection (thanks to mcm69) Non-Latin and special characters encoded correctly in URLs Pop-ups are now slot-based (use first free slot and will never be overlapped — test it while previewing timeout) Public sync folder setting is hidden when auto-detected Timeout interval is displayed in popup previews A lot of major and minor code refactoring performed .NET Framework 4.0 Client...MVC Controls Toolkit: Mvc Controls Toolkit 1.1.5 RC: Added Extended Dropdown allows a prompt item to be inserted as first element. RequiredAttribute, if present, trggers if no element is chosen Client side javascript function to set/get the values of DateTimeInput, TypedTextBox, TypedEditDisplay, and to bind/unbind a "change" handler The selected page in the pager is applied the attribute selected-page="selected" that can be used in the definition of CSS rules to style the selected page items controls now interpret a null value as an empr...Umbraco CMS: Umbraco CMS 5.0 CTP 1: Umbraco 5 Community Technology Preview Umbraco 5 will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out our first CTP of version 5 today! If you're new to Umbraco and would like to get a quick low-down on our popular and easy-to-learn approach to content management, check out our intro video here. What's in the v5 CTP box? This is a preview version of version 5 and includes support for the following familiar Umbr...LevelZap: 1.0: Initial version. 
Zap away!Ribbon Browser for Microsoft Dynamics CRM 2011: Ribbon Browser (1.0.514.30): Initial releaseCoding4Fun Kinect Toolkit: Coding4Fun.Kinect Toolkit: Version 1.0Kinect Mouse Cursor: Kinect Mouse Cursor v1.0: The initial release of the Kinect Mouse Cursor project!New ProjectsBaffoHat Kinect: BaffHat is a game for fun with friends and drink a little.Bango Windows Phone 7 Application Analytics SDK: Bango application analytics is an analytics solution for mobile applications. This SDK provides a framework you can use in your application to add analytics capabilities to your mobile applications. It's developed in C#.NET (4.0) and targets the Windows Phone 7 operating system.C++ Winsock WebSocket server: A websockets server built in C++ using the C APIs Winsock and <windows.h>. Works with the current version of Chrome (13.0.782.24).Core.Cpp: CORE-CPPDataContractJson ValueProviderFactory: ValueProviderFactory for ASP.NET MVC that uses the DataContractJsonSerializer for JSON Serialization. This comes in handy when porting RESTful JSON Services from WCF to ASP.NET MVC.E4D CRM 2011 Ribbon Utility: E4D CRM 2011 Ribbon Utility helps you speed Dynamics CRM 2011 Ribbon customization. Ribbon customization tasks can be exhausting, as it require many iterations, mouse clicks and input. This utility does the heavy lifting for you: it will zip, upload and publish automatically!EASY: Eve Application Service for YoueGlass: eGlassGrove SMTP Mailer: Grove SMTP Mailer is a Simple and Open Source SMTP E-Mail Sender Written in Visual Basic by Hommerhart - Effect-7 Grove SMTP Mailer on SourceForge : https://sourceforge.net/projects/grovesmtpmailer/hoox: PHP/MySQL CMS focused on speed and simplicity over feature-diversity.Intelligent Enterprise Solution: An ERP source code,including sample,demo,tool.documentKinect Touch Device: A simple "WPF4 Touch Device" using Kinect with OpenNI & NITE (written in C#). This project makes it easy to transform your WPF4 touch application in "touch less" with the Kinect with little change : replace "Window" base class by "KinectWindow". Currently the Touch Down and Touch Up is determined by the distance of the hand from the Kinect. A possible change would be to detect if the hand is open or closed to enable the Touch Down or Touch Up.kinectPainter: kinect paint projectLiteQuery: Lite Query is an implementation of the Query Object Pattern that will help you to build the queries in the Frontend layer in a way that will be independent from the ORM used in Data Access. The object is translated in the language of the ORM framework using Query Translators. This version comes with translators for Entity Framework and NHibernate.octoInstall: octoInstall is a fast and easy to operate installer and updater utility. For updating it uses a binary compare techology, that creates very small update packages to patch software from one version to another and saves up to 99% of update size.P7T_Engine: P7T_Engine makes it easier for developers to develop advanced 2D game and also basic 3D games. You'll no longer have to write your own engine for 2D games or write large chunks of code to make your project happen. The entire engine is developed in C++. LUA scripting ability is something that we are looking forward to.Panning Tile Control for Windows Phone 7: The panning tile control mimics the functionality of the Windows Phone 7 music tile: Photos and text slowly pan, scroll and fade. 
It can be used inside of any WP7 Silverlight app.PerstDemo: This project is to show the large amount of data, which is in the format of XML file on Window Phone 7 using Perst Database for this we convert XML to Perst. Due to large data, it consumes a lot of time in conversion & also if fire queries on the database. So, what can be done to lessen the time consumed. Please, download the project http://perstdemo.codeplex.com/releases/view/68640 and have a look. Waiting for the response, ideas, suggestions....SimplePaxos: Implement a simple paxos protocolSOL Polar Converter: Create optimized polars for Bluewater Racing and Expedition from Sailonline races or text polar dataStructured Web Data Extraction: The dataset used in SIGIR 2011 paperSurveyTemplate: This is just for practising the use of culteInfo class in c#Task Unlocker: A site level feature that provides UI for unclocking tasks locked by workflow upgrade. Why tasks get locked : Explanation of cause http://blogs.code-counsel.net/Wouter/Lists/Posts/Post.aspx?ID=118 Inspiration for this feature : Workaround code http://geek.hubkey.com/2007/09/locked-workflow.html Test Project kobi: just for testWindows Azure CDN Helpers: This is a project that helps you quickly utilize the Windows Azure CDN from your ASP.NET MVC website. These helpers will work on sites hosted on and off Windows Azure.WPF Breakout: Remake of the classic Breakout game using WPF. WPFRadio: a WPF Radio Web Player ...

    Read the article

  • CodePlex Daily Summary for Tuesday, November 06, 2012

    CodePlex Daily Summary for Tuesday, November 06, 2012Popular ReleasesMetodología General Ajustada - MGA: 03.04.01: Cambios Parmenio: Se desarrollan las opciones de los botones Siguiente y Anterior en todos los formularios de Identificación y Preparación. Cambios John: Integración de código con cambios enviados por Parmenio Bonilla. Generación de instaladores. Soporte técnico por correo electrónico, telefónico y en sitio.DictationTool: DictationCool-WPF: • Open a media file to start a new dication. • Open a dct file to continue a dictation. • Compare your dictation with original text if exists. • Save your dictation to dct file, and restore it to continue later. • Save the compared result to html file.SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.8.0: Getting Started Download and extract the files, no install required. The ExpressionEditor.zip download contains a folder for each SQL Server version. ExpressionEditor2005 ExpressionEditor2008 ExpressionEditor2012 Changes Fixed issues 32868 and 33291 raised by BIDS Helper users. No functional changes from previous release. Versions There are three versions included, all built from the same code with the same functionality, but each targeting a different release of SQL Server. The downlo...LINQ to Twitter: LINQ to Twitter Beta v2.1.2: Now supports Windows Phone 8. Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, and Windows 8. 100% Twitter API coverage. Now supports Twitter API v1.1! LINQ to Twitter Samples contains example code for using LINQ to Twitter with various .NET technologies. Downloadable source code also has C# samples in the LinqToTwitterDemo project and VB samples in the LinqToTwitterDemoVB project. Also on NuGet.Resume Search: Version 1.0.0.3: - updates to messages - slight change to UIMCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6. Added support for long description atom in iTunesKuick Application & ORM Framework: Version 1.0.15322.29505 - Released 2012-11-05: Fix bugs and add new features.FoxyXLS: FoxyXLS Releases: Source code and samplesMySQL Tuner for Windows: 0.2: Welcome to the second beta of MySQL Tuner for Windows! This release fixes a critical bug in 0.1 where it would not work with recent version of MySQL server. Be warned that there will be bugs in this release, so please do not use on production or critical systems. Do post details of issues found to the issue tracker, and I will endeavour to fix them, when I can. I would love to have your feedback, and if possible your support! 
Requirements Microsoft .NET Framework 4.0 (http://www.microsoft....Dyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is a Initial Release.HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.ProDinner - ASP.NET MVC Sample (EF4.4, N-Tier, jQuery): 8: update to ASP.net MVC Awesome 3.0 udpate to EntityFramework 4.4 update to MVC 4 added dinners grid on homepageASP.net MVC Awesome - jQuery Ajax Helpers: 3.0: added Grid helper added XML Documentation added textbox helper added Client Side API for AjaxList removed .SearchButton from AjaxList AjaxForm and Confirm helpers have been merged into the Form helper optimized html output for AjaxDropdown, AjaxList, Autocomplete works on MVC 3 and 4BlogEngine.NET: BlogEngine.NET 2.7: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions. If you looking for Web Application Project, ...Launchbar: Launchbar 4.2.2.0: This release is the first step in cleaning up the code and using all the latest features of .NET 4.5 Changes 4.2.2 (2012-11-02) Improved handling of left clicks 4.1.0 (2012-10-17) Removed tray icon Assembly renamed and signed with strong name Note When you upgrade, Launchbar will start with the default settings. You can import your previous settings by following these steps: Run Launchbar and just save the settings without configuring anything Shutdown Launchbar Go to the folder %LOCA...Mouse Jiggler: MouseJiggle-1.3: This adds the much-requested minimize-to-tray feature to Mouse Jiggler.Umbraco CMS: Umbraco 4.10.0 Release Candidate: This is a Release Candidate, which means that if we do not find any major issues in the next week, we will release this version as the final release of 4.10.0 on November 9th, 2012. The documentation for the MVC bits still lives in the Github version of the docs for now and will be updated on our.umbraco.org with the final release of 4.10.0. Browse the documentation here: https://github.com/umbraco/Umbraco4Docs/tree/4.8.0/Documentation/Reference/Mvc If you want to do only MVC then make sur...Skype Auto Recorder: SkypeAutoRecorder 1.3.4: New icon and images. Reworked settings window. Implemented high-quality sound encoding. Implemented a possibility to produce stereo records. Added buttons with system-wide hot keys for manual starting and canceling of recording. Added buttons for opening folder with records. Added Help button. Fixed an issue when recording is continuing after call end. Fixed an issue when recording doesn't start. Fixed several bugs and improved stability. Major refactoring and optimization...Python Tools for Visual Studio: Python Tools for Visual Studio 1.5: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, etc. support. 
For a quick overview of the general IDE experience, please watch this video There are a number of exciting improvement in this release comp...AssaultCube Reloaded: 2.5.5: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) Changelog: Fixed potential bot bugs: Map change, OpenAL...New Projects1105p1: just a testAssaultCube Reloaded Launcher: This is the official ACR Launcher For AssaultCube Reloaded. Betting Manager: This project allows the users to store and analyze all their betting activities.BTalk: Simple communicator with encryption capability.Chronozoom Extensions: This project adds a touch interface to the original chronozoom source code, as well as adding an intuitive way to integrate new data into chronozoom easily.CloudClipX: CloudClipx is a simple clipboard monitor that optionally connects to a cloud service where you can upload your most recent clips and retrieve them from your mobcodeplexproject01: The project is okcodeplexproject02: Jean well done summaryDelayed Reaction: Delayed Reaction is a mini library designed to help with application event distribution. Events can fire from any method in your code in any form or window and be distributed to any window that registered for the event. Events can also be fired with a delay or become sticky. Dyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: SharePoint 2010 Visual Web Part solution to create RDLC reports on fly, based on lists/libraries available in SharePoint 2010 Portal.Eight OData: Consuming an OData feedFreelance: Freelance projectgraphdrawing: graph drawingGrasshoppers Time Board: Application for measure time during floorball match. Can be used also for socker or other game. HCyber: A Cyberminer Search Engine using the KWIC system.lcwineight: lcs win eight projectMeta Protector: MetaProtector is a DotNetNuke module intended to help sites apply additional Meta tags to their markup. MetroEEG: Mindwave for Windows Phone: Windows Phone 8 API for Minewave Mobile EEG Headset. minipp: This is a test project.mleai: A test project for ml.MVC Multi Layer Best Practices App: MVC Multi Layer Best Practices AppNon-Unique Key Dictionary: A dictionary that does not require unique keys for values. nTribeHR: .NET wrapper for TribeHR web services.Object Serialiser: Object Instance to Object Initialiser Syntax serialiserOrchard Metro JS library: Metro JS is a JavaScript plugin for jQuery developed to easily enable Metro interfaces on the web.Pasty Cloud Clipboard Client: A windows forms application for managing your Pasty cloud clipboard. Also includes a separate API wrapper that can be used independently.PowerComboBox: PowerComboBox is an enhanced ComboBox control. Programmed in Visual Basic, PowerComboBox adds 4 new features, Hovering; Backcolour; Checkboxes; Groups.project4: my summaryProjet POO IMA: Projet POO IMAQuickShot: Quickshot is a desktop screen capture utility that allows you to upload your captures directly to the popular image hosting website, imgur. 
RePro - NFe: Aplicativo de integração com ERP’s para emissão e distribuição de NFe.Resume Search: Search, filter and parse user resumes in PDF, .DOC, .DOCX format into properly organized database format.Script-base POP3 Connector for Exchange: This is a script based POP3 connector for Exchange. It works with all Exchange having "Pickup folder"Squadron for SharePoint 2010: Squadron is a: - Collection of SharePoint 2010 utility modules - Created with Windows Forms application - Plugin enabled modules - Free SRS Subscription Manager: This utility allows you to re-execute subscriptions in Microsoft SQL Server Reporting ServicesStart Screen Button: Windows 8 / Server 2012 taskbar button to bring up the Start Screen. Useful for RDP, VNC, LogMeIn, etc.TableSizer: A Windows Live Writer plugin that automatically sets the width property of every table to a configurable value prior to publishing the post.TodayHumor for Windows: Windows Phone? ?? ??? ?? ?????????. Towers of Hanoi 3D: A simple Towers of Hanoi 3D game for the Windows Store. Developed with the Visual Studio 3D Starter Kit - http://aka.ms/vs3dkitTweet 4 ME: Tweet 4 ME is a Java Micro Edition (MIDP 2.0 CLDC 1.0) based Twitter client built with a custom GUI framework. The application is part of a college project.UWE Bristol - Robotic Pick and Place System 2012: A robot arm pick and place will be trained to repeatedly move an item from one location to another. The training and control input will be from a 4x4 keypad.

    Read the article

  • Windows Azure: Major Updates for Mobile Backend Development

    - by ScottGu
    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include: Mobile Services: Custom API support Mobile Services: Git Source Control support Mobile Services: Node.js NPM Module support Mobile Services: A .NET API via NuGet Mobile Services and Web Sites: Free 20MB SQL Database Option for Mobile Services and Web Sites Mobile Notification Hubs: Android Broadcast Push Notification Support All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Mobile Services: Custom APIs, Git Source Control, and NuGet Windows Azure Mobile Services provides the ability to easily stand up a mobile backend that can be used to support your Windows 8, Windows Phone, iOS, Android and HTML5 client applications.  Starting with the first preview we supported the ability to easily extend your data backend logic with server side scripting that executes as part of client-side CRUD operations against your cloud back data tables. With today’s update we are extending this support even further and introducing the ability for you to also create and expose Custom APIs from your Mobile Service backend, and easily publish them to your Mobile clients without having to associate them with a data table. This capability enables a whole set of new scenarios – including the ability to work with data sources other than SQL Databases (for example: Table Services or MongoDB), broker calls to 3rd party APIs, integrate with Windows Azure Queues or Service Bus, work with custom non-JSON payloads (e.g. Windows Periodic Notifications), route client requests to services back on-premises (e.g. with the new Windows Azure BizTalk Services), or simply implement functionality that doesn’t correspond to a database operation.  The custom APIs can be written in server-side JavaScript (using Node.js) and can use Node’s NPM packages.  We will also be adding support for custom APIs written using .NET in the future as well. Creating a Custom API Adding a custom API to an existing Mobile Service is super easy.  Using the Windows Azure Management Portal you can now simply click the new “API” tab with your Mobile Service, and then click the “Create a Custom API” button to create a new Custom API within it: Give the API whatever name you want to expose, and then choose the security permissions you’d like to apply to the HTTP methods you expose within it.  You can easily lock down the HTTP verbs to your Custom API to be available to anyone, only those who have a valid application key, only authenticated users, or administrators.  Mobile Services will then enforce these permissions without you having to write any code: When you click the ok button you’ll see the new API show up in the API list.  Selecting it will enable you to edit the default script that contains some placeholder functionality: Today’s release enables Custom APIs to be written using Node.js (we will support writing Custom APIs in .NET as well in a future release), and the Custom API programming model follows the Node.js convention for modules, which is to export functions to handle HTTP requests. The default script above exposes functionality for an HTTP POST request. To support a GET, simply change the export statement accordingly.  
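Once a GET handler is in place, a quick way to smoke-test the new endpoint from any .NET client is a plain HTTP request — if you locked the verb down to callers with the application key, the key travels in the X-ZUMO-APPLICATION request header. The snippet below is only an illustrative sketch (the service URL, API name and key are placeholders, not values from this post):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CustomApiSmokeTest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Only needed when the verb requires the application key;
            // APIs opened up to "Everyone" can be called with no header at all.
            client.DefaultRequestHeaders.Add("X-ZUMO-APPLICATION", "<your application key>");

            // Hypothetical mobile service URL and custom API name.
            var response = await client.GetAsync("https://yourservice.azure-mobile.net/api/todos");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}

For real applications the typed InvokeApiAsync call shown a little further below is the nicer option; the raw request is simply a handy way to confirm that the permission settings behave as expected.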
(The original post shows a sample script here that reads and returns data from Windows Azure Table Storage using the Azure Node API.) After saving the changes, you can now call this API from any Mobile Service client application (including Windows 8, Windows Phone, iOS, Android or HTML5 with CORS). Below is the code for how you could invoke the API asynchronously from a Windows Store application using .NET and the new InvokeApiAsync method, and data-bind the results to a control within your XAML:

    private async void RefreshTodoItems() {
        var results = await App.MobileService.InvokeApiAsync<List<TodoItem>>("todos", HttpMethod.Get, parameters: null);
        ListItems.ItemsSource = new ObservableCollection<TodoItem>(results);
    }

Integrating authentication and authorization with Custom APIs is really easy with Mobile Services. Just like with data requests, custom API requests enjoy the same built-in authentication and authorization support of Mobile Services (including integration with Microsoft ID, Google, Facebook and Twitter authentication providers), and it also enables you to easily integrate your Custom API code with other Mobile Service capabilities like push notifications, logging, SQL, etc. Check out our new tutorials to learn more about how to use the new Custom API support, and start adding custom APIs to your app today.

Mobile Services: Git Source Control Support Today’s Mobile Services update also enables source control integration with Git.  The new source control support provides a Git repository as part of your Mobile Service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux). To use the new support, navigate to the dashboard for your mobile service and select the Set up source control link: If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository: Once you configure this, you can switch to the configure tab of your Mobile Service and you will see a Git URL you can use to access your repository: You can use this URL to clone the repository locally from your favorite command line:

> git clone https://scottgutodo.scm.azure-mobile.net/ScottGuToDo.git

Below is the directory structure of the repository: As you can see, the repository contains a service folder with several subfolders. Custom API scripts and associated permissions appear under the api folder as .js and .json files respectively (the .json files persist a JSON representation of the security settings for your endpoints). Similarly, table scripts and table permissions appear as .js and .json files, but since table scripts are separate per CRUD operation, they follow the naming convention of <tablename>.<operationname>.js. Finally, scheduled job scripts appear in the scheduler folder, and the shared folder is provided as a convenient location for you to store code shared by multiple scripts and a few miscellaneous things such as the APNS feedback script.
Let’s modify the table script todos.js file so that we have slightly better error handling when an exception occurs when we query our Table service:

todos.js

tableService.queryEntities(query, function(error, todoItems){
    if (error) {
        console.error("Error querying table: " + error);
        response.send(500);
    } else {
        response.send(200, todoItems);
    }
});

Save these changes, and now back in the command line prompt commit the changes and push them to the Mobile Service:

> git add .
> git commit -m "better error handling in todos.js"
> git push

Once deployment of the changes is complete, they will take effect immediately, and you will also see the changes be reflected in the portal: With the new Source Control feature, we’re making it really easy for you to edit your mobile service locally and push changes in an atomic fashion without sacrificing ease of use in the Windows Azure Portal.

Mobile Services: NPM Module Support The new Mobile Services source control support also allows you to add any Node.js module you need in the scripts beyond the fixed set provided by Mobile Services. For example, you can easily switch to use Mongo instead of Windows Azure Table in our example above. Set up Mongo DB by either purchasing a MongoLab subscription (which provides MongoDB as a Service) via the Windows Azure Store or setting it up yourself on a Virtual Machine (either Windows or Linux). Then go to the service folder of your local git repository and run the following command:

> npm install mongoose

This will add the Mongoose module to your Mobile Service scripts.  After that you can use and reference the Mongoose module in your custom API scripts to access your Mongo database:

var mongoose = require('mongoose');
var schema = mongoose.Schema({ text: String, completed: Boolean });

exports.get = function (request, response) {
    mongoose.connect('<your Mongo connection string>');
    var TodoItemModel = mongoose.model('todoitem', schema);
    TodoItemModel.find(function (err, items) {
        if (err) {
            console.log('error:' + err);
            return response.send(500);
        }
        response.send(200, items);
    });
};

Don’t forget to push your changes to your mobile service once you are done:

> git add .
> git commit -m "Switched to use Mongo Labs"
> git push

Now our Mobile Service app is using Mongo DB! Note: with today’s update, usage of custom Node.js modules is limited to Custom API scripts only. We will enable it in all scripts (including data and custom CRON tasks) shortly.

New Mobile Services NuGet package, including .NET 4.5 support A few months ago we announced a new pre-release version of the Mobile Services client SDK based on portable class libraries (PCL). Today, we are excited to announce that this new library is now a stable .NET client SDK for Mobile Services and is no longer a pre-release package. Today’s update includes full support for Windows Store, Windows Phone 7.x, and .NET 4.5, which allows developers to use Mobile Services from ASP.NET or WPF applications. You can install and use this package today via NuGet.

Mobile Services and Web Sites: Free 20MB Database for Mobile Services and Web Sites Starting today, every customer of Windows Azure gets one free 20MB database to use for 12 months free (for both dev/test and production) with Web Sites and Mobile Services.
When creating a Mobile Service or a Web Site, simply choose the new "Create a new Free 20MB database" option to take advantage of it. You can use this free SQL database together with the 10 free Web Sites and 10 free Mobile Services you get with your Windows Azure subscription, or from any other Windows Azure VM or Cloud Service.

Notification Hubs: Android Broadcast Push Notification Support

Earlier this year, we introduced a new capability in Windows Azure for sending broadcast push notifications at high scale: Notification Hubs. In the initial preview of Notification Hubs you could use this support with both iOS and Windows devices. Today we're excited to announce new Notification Hubs support for sending push notifications to Android devices as well.

Push notifications are a vital component of mobile applications. They are critical not only in consumer apps, where they are used to increase app engagement and usage, but also in enterprise apps, where up-to-date information increases employee responsiveness to business events. You can use Notification Hubs to send push notifications to devices from any type of app (a Mobile Service, Web Site, Cloud Service or Virtual Machine).

Notification Hubs provide you with the following capabilities:

  • Cross-platform push notification support. Notification Hubs provide a common API to send push notifications to iOS, Android, and Windows Store devices at once. Your app can send notifications in platform-specific formats or in a platform-independent way.
  • Efficient multicast. Notification Hubs are optimized to broadcast push notifications to thousands or millions of devices with low latency. Your server back-end can fire one message into a Notification Hub, and millions of push notifications can automatically be delivered to your users. Devices and apps can specify a number of per-user tags when registering with a Notification Hub. These tags do not need to be pre-provisioned or disposed of, and they provide a very easy way to send filtered notifications to any number of users/devices with a single API call.
  • Extreme scale. Notification Hubs enable you to reach millions of devices without having to re-architect or shard your application. The pub/sub routing mechanism allows you to broadcast notifications in a highly efficient way, which makes it easy to route and deliver notification messages to millions of users without building your own routing infrastructure.
  • Usable from any backend app. Notification Hubs can be easily integrated into any back-end server app, whether it is a Mobile Service, a Web Site, a Cloud Service or an IaaS VM.

It is easy to configure Notification Hubs to send push notifications to Android.
Create a new Notification Hub within the Windows Azure Management Portal (New->App Services->Service Bus->Notification Hub). Then register for Google Cloud Messaging using https://code.google.com/apis/console and obtain your API key, and paste that key on the Configure tab of your Notification Hub management page under the Google Cloud Messaging settings.

Then just add code to the OnCreate method of your Android app's MainActivity class to register the device with the Notification Hub:

gcm = GoogleCloudMessaging.getInstance(this);
String connectionString = "<your listen access connection string>";
hub = new NotificationHub("<your notification hub name>", connectionString, this);
String regid = gcm.register(SENDER_ID);
hub.register(regid, "myTag");

Now you can broadcast notifications from your .NET back-end (or Node, Java, or PHP) to any Windows Store, Android, or iOS device registered with the "myTag" tag via a single API call (you can literally broadcast messages to millions of registered clients with just one API call):

var hubClient = NotificationHubClient.CreateClientFromConnectionString(
                  "<your connection string with full access>",
                  "<your notification hub name>");
hubClient.SendGcmNativeNotification("{ 'data' : { 'msg' : 'Hello from Windows Azure!' } }", "myTag");

Notification Hubs provide an extremely scalable, cross-platform push notification infrastructure that enables you to efficiently route push notification messages to millions of mobile users and devices. It will make your push notification logic significantly simpler and more scalable, and allow you to build even better apps with it. Learn more about Notification Hubs here on MSDN.

Summary

The above features are now live and available to start using immediately (note: some of the services are still in preview). If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Enabling DNS for IPv6 infrastructure

After successfully distributing IPv6 address information automatically via DHCPv6 in your local network, it might be time to start offering some more services. Usually we use host names to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolution. This is the third article in a series on IPv6 configuration:

1. Configure IPv6 on your Linux system
2. DHCPv6: Provide IPv6 information in your local network
3. Enabling DNS for IPv6 infrastructure
4. Accessing your web server via IPv6

Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

What's your name and your IPv6 address?

First, check whether the BIND name server is already running on your system:

$ sudo service bind9 status
 * bind9 is running

If the service is not recognised, you have to install it first. This is done very easily and quickly like so:

$ sudo apt-get install bind9

Once again, there is no specialised package for IPv6. Just the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file:

$ sudo nano /etc/bind/named.conf.options

acl iosnet {
        127.0.0.1;
        192.168.1.0/24;
        ::1/128;
        2001:db8:bad:a55::/64;
};

listen-on { iosnet; };
listen-on-v6 { any; };
allow-query { iosnet; };
allow-transfer { iosnet; };

The most important directive is listen-on-v6. This enables named to bind to the IPv6 addresses specified on your system. The easiest option is to specify any as the value, in which case named binds to all available IPv6 addresses during start. More details and explanations are found in the man pages of named.conf. Save the file and restart the bind9 service. As usual, check your log files and correct your configuration in case of any logged error messages.

Using the netstat command you can validate whether the service is running and which IP and IPv6 addresses it is bound to:

$ sudo service bind9 restart
$ sudo netstat -lnptu | grep "named\W*$"
tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
udp6       0      0 :::53                 :::*                                1734/named

Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server:

$ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
Using domain server:
Name: 2001:db8:bad:a55::2
Address: 2001:db8:bad:a55::2#53
Aliases:

www.6bone.net is an alias for 6bone.net.
6bone.net has IPv6 address 2001:5c0:1000:10::2

Alright, our newly configured BIND named is fully operational. You might be more familiar with the dig command. Here is the same kind of IPv6 host name lookup, but dig provides more details about that particular host as well as the domain in general:

$ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

More details on the Berkeley Internet Name Domain (BIND) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.
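If you prefer to verify the resolver from code rather than with host or dig, the following is a small, hypothetical Node.js sketch (not part of the original article) that points Node's resolver at the name server configured above and asks for an AAAA record; the server address and host name are the ones used in the examples:

// check-aaaa.js - minimal sketch using Node's built-in dns module
// (assumes a Node.js version that provides dns.setServers)
var dns = require('dns');

// Point the resolver at the freshly configured BIND instance
dns.setServers(['2001:db8:bad:a55::2']);

// Ask for the IPv6 (AAAA) records of the test host used above
dns.resolve6('www.6bone.net', function (err, addresses) {
    if (err) {
        console.error('AAAA lookup failed: ' + err.message);
        return;
    }
    console.log('IPv6 addresses: ' + addresses.join(', '));
});

Running node check-aaaa.js from a machine covered by the iosnet ACL should print the same address that the host query returned.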
Setting up your own DNS zone

Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolution. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. To achieve this, we first define the zone in the configuration file named.conf.local:

$ sudo nano /etc/bind/named.conf.local

//
// Do any local configuration here
//
zone "ios.mu" {
        type master;
        file "/etc/bind/zones/db.ios.mu";
};

Here we specify the location of our zone database file. Next, we create it and add our host names, our IP and our IPv6 addresses:

$ sudo nano /etc/bind/zones/db.ios.mu

$ORIGIN .
$TTL 259200     ; 3 days
ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                2014031101 ; serial
                                28800      ; refresh (8 hours)
                                7200       ; retry (2 hours)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      server.ios.mu.
$ORIGIN ios.mu.
server                  A       192.168.1.2
server                  AAAA    2001:db8:bad:a55::2
client1                 A       192.168.1.3
client1                 AAAA    2001:db8:bad:a55::3
client2                 A       192.168.1.4
client2                 AAAA    2001:db8:bad:a55::4

With a couple of machines in place, it's time to reload that new configuration. Note: each time you change your zone databases you have to update the serial value, too. Named loads the plain-text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change the serial, named will not use the new records from the text file but the previously indexed ones, unless you flush the index and force a reload of the zone. This can be done easily either by restarting named:

$ sudo service bind9 restart

or by reloading the configuration using the name server control utility, rndc:

$ sudo rndc reconfig

Check your log files for any error messages and whether the new zone database has been accepted. Next, we resolve a host name, trying to get its IPv6 address, like so:

$ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
Using domain server:
Name: 2001:db8:bad:a55::2
Address: 2001:db8:bad:a55::2#53
Aliases:

server.ios.mu has IPv6 address 2001:db8:bad:a55::2

Looks good. Alternatively, you could have just pinged the system using the ping6 command instead of the regular ping:

$ ping6 server
PING server(2001:db8:bad:a55::2) 56 data bytes
64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
^C
--- ios1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.

