Search Results

Search found 8593 results on 344 pages for 'regular expression'.


  • BizTalk Envelopes explained

    - by Robert Kokuti
    Recently I've been trying to get some order into an ESB-BizTalk pub/sub scenario, and decided to wrap the payload into standardized envelopes. I have used envelopes before in a 'lightweight' fashion, and I found that they can be quite useful and powerful if used systematically. Here is what I learned.

    The Theory

    In my experience, Envelopes are often underutilised in a BizTalk solution, and quite often their full potential is not well understood. Here I try to simplify the theory behind Envelopes within BizTalk. Envelopes can be used to attach additional data to the 'real' data (payload). This additional data can contain all routing and processing information, and allows treating the business data as a 'black box', possibly compressed and/or encrypted etc. The point here is that the infrastructure does not need to know anything about the business data content, just as a postman does not need to know the contents of the letter within the envelope.

    BizTalk has built-in support for envelopes through the XMLDisassembler and XMLAssembler pipeline components (these are part of the XMLReceive and XMLSend default pipelines). These components, among other things, perform the following:

    XMLDisassembler
    - Extracts the payload from the envelope into the Message Body
    - Copies data from the envelope into the message context, as specified by the property schema(s) associated with the envelope schema

    Typically, once the envelope is through the XMLDisassembler, the payload is submitted into the Messagebox, and the rest of the envelope data are copied into the context of the submitted message. The XMLDisassembler uses the Property Schemas, referenced by the Envelope Schema, to determine the name of the promoted Message Context element.

    XMLAssembler
    - Wraps the Message Body inside the specified envelope schema
    - Populates the envelope values from the message context, as specified by the property schema(s) associated with the envelope schema

    Notice that there is no requirement to use the receiving envelope schema when sending. The sent message can be wrapped within any suitable envelope, regardless of whether the message was originally received within an envelope or not. However, by sharing Property Schemas between Envelopes, it is possible to pass values from the incoming envelope to the outgoing envelope via the Message Context.

    The Practice

    Creating the Envelope

    Add a new Schema to the BizTalk project. Envelopes are defined as schemas, with the <Schema> Envelope property set to Yes, and the root node's Body XPath property pointing to the node which contains the payload. Typically, you'd create an envelope structure with a Header node for the routing data and a Body node for the payload. Click on the <Schema> node and set the Envelope property to Yes. Then, click on the Envelope node, and set the Body XPath property to point to the 'Body' node. The 'Body' node is a Child Element, and its Data Structure Type is set to xs:anyType. This allows the Body node to carry any payload data. The XMLReceive pipeline will submit the data found in the Body node, while the XMLSend pipeline will copy the message into the Body node, before sending to the destination.

    Promoting Properties

    Once you have defined the envelope, you may want to promote the envelope data (anything other than the Body) as Property Fields, in order to preserve their value in the message context. Anything not promoted will be lost when the XMLDisassembler extracts the payload from the Body. Typically, this means you promote everything in the Header node. Property promotion uses associated Property Schemas. These are special BizTalk schemas which have a flat field structure. Property Schemas define the name of the promoted values in the Message Context, by combining the Property Schema's Namespace and the individual Field names. It is worth being systematic when it comes to naming your schemas, their namespace and type name. A coherent method will make your life easier when it comes to referencing the schemas during development, and managing subscriptions (filters) in BizTalk Administration. I developed a fairly coherent naming convention which I'll probably share in another article. Because the property schema must be flat, I recommend creating one for each level in the envelope header hierarchy.

    Property schemas are very useful in passing data between incoming and outgoing envelopes. As I mentioned earlier, in/out envelopes do not have to be the same, but you can use the same property schema for promoting the outgoing envelope fields that you used for the incoming schema. As you can reference many property schemas for field promotion, you can pick data from a variety of sources when you define your outgoing envelope. For example, the outgoing envelope can carry some of the incoming envelope's promoted values, plus some values from the standard BizTalk message context, like the AdapterReceiveCompleteTime property from the BizTalk message-tracking properties. The values you promote for the outgoing envelope will be automatically populated from the Message Context by the XMLAssembler pipeline component.

    Using the Envelope

    Receiving

    Enveloped messages are automatically recognized by the XMLReceive pipeline, or any other custom pipeline which includes the XMLDisassembler component. The Body Path node will become the Message Body, while the rest of the envelope values will be added to the Message Context, as defined by the Property Schemas referenced by the Envelope Schema.

    Sending

    The Send Port's filter expression can use the promoted properties from the incoming envelope. If you want to enclose the sent message within an envelope, the Send Port XMLAssembler component must be configured with the fully qualified envelope name. One way of obtaining the fully qualified envelope name is to copy it from the envelope schema property page; the full envelope schema name is constructed as <Name>, <Assembly>. The outgoing envelope is populated by the XMLAssembler pipeline component. The Message Body is copied to the specified envelope's Body Path node, while the rest of the envelope fields are populated from the Message Context, according to the Property Schemas associated with the Envelope Schema. That's all for now, happy enveloping!
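    To make the message-context side of this concrete, below is a minimal C# sketch (not from the original article) of how a custom pipeline component might read one of the values the XMLDisassembler promoted from the envelope header, and re-promote it for routing. The property name and namespace are assumptions (they must match a field in your property schema and that schema's target namespace), and the IComponent plumbing is omitted.

    using Microsoft.BizTalk.Component.Interop;
    using Microsoft.BizTalk.Message.Interop;

    public class EnvelopeHeaderReader
    {
        // Hypothetical names: replace with your property schema's field name and target namespace.
        private const string PropName = "DestinationSystem";
        private const string PropNamespace = "http://mycompany.example/schemas/envelope-header";

        public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
        {
            // The XMLDisassembler copies envelope header values into the message context,
            // so downstream components can read them without touching the payload.
            object value = pInMsg.Context.Read(PropName, PropNamespace);

            if (value != null)
            {
                // Promote (rather than just Write) so the value can also drive send port filters.
                pInMsg.Context.Promote(PropName, PropNamespace, value);
            }

            return pInMsg;
        }
    }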

    Read the article

  • Strange squares like hints in a Silverlight application?

    - by lina
    Good day! Strange square appears on mouse hover on text boxes, buttons, etc (something like hint) in a silverlight navigation application - how can I remove it? a scrin shot an example .xaml page: <Code:BasePage x:Class="CAP.Views.Main" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:Code="clr-namespace:CAP.Code" d:DesignWidth="640" d:DesignHeight="480" Title="?????? ??????? ???????? ??? ?????"> <Grid x:Name="LayoutRoot"> <Grid.RowDefinitions> <RowDefinition Height="103*" /> <RowDefinition Height="377*" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="120*" /> <ColumnDefinition Width="520*" /> </Grid.ColumnDefinitions> <Image Height="85" HorizontalAlignment="Left" Name="image1" Stretch="Fill" VerticalAlignment="Top" Width="84" Margin="12,0,0,0" ImageFailed="image1_ImageFailed" Source="/CAP;component/Images/My-Computer.png" /> <TextBlock Grid.Column="1" Height="Auto" TextWrapping="Wrap" HorizontalAlignment="Left" Margin="0,12,0,0" Name="textBlock1" Text="Good day!" VerticalAlignment="Top" FontFamily="Verdana" FontSize="16" Width="345" FontWeight="Bold" /> <TextBlock Grid.Column="1" Grid.Row="1" TextWrapping="Wrap" Height="299" HorizontalAlignment="Left" Name="textBlock2" VerticalAlignment="Top" FontFamily="Verdana" FontSize="14" Width="441" > <Run Text="Some text "/><LineBreak/><LineBreak/><Run Text="and so on"/> <LineBreak/> </TextBlock> </Grid> xaml.cs: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Windows.Navigation; using CAP.Code; namespace CAP.Views { public partial class Main : BasePage { public Main() : base() { InitializeComponent(); MapBuilder.AddToMap(new SiteMapUnit() { Caption = "???????", RelativeUrl = "Main" },true); ((App)Application.Current).Mainpage.tvMainMenu.SelectedItems.Clear(); } // Executes when the user navigates to this page. 
protected override void OnNavigatedTo(NavigationEventArgs e) { } private void image1_ImageFailed(object sender, ExceptionRoutedEventArgs e) { } protected override string[] NeededPermission() { return new string[0]; } } } MainPage.xaml <UserControl x:Class="CAP.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Code="clr-namespace:CAP.Code" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:uriMapper="clr-namespace:System.Windows.Navigation;assembly=System.Windows.Controls.Navigation" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls" xmlns:telerikNavigation="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation" mc:Ignorable="d" Margin="0,0,0,0" Width="auto" Height="auto" xmlns:dataInput="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.Input"> <ScrollViewer Width="auto" Height="auto" BorderBrush="White" BorderThickness="0" Margin="0,0,0,0" x:Name="sV" HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto" > <ScrollViewer.Content> <Grid Width="auto" Height="auto" x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}" Margin="0,0,0,0"> <StackPanel Width="auto" Height="auto" Orientation="Vertical" Margin="250,0,0,50"> <Border x:Name="ContentBorder2" Margin="0,0,0,0" > <!--<navigation:Frame Margin="0,0,0,0" Width="auto" Height="auto" x:Name="AnotherFrame" VerticalAlignment="Top" Style="{StaticResource ContentFrameStyle}" Source="/Views/Menu.xaml" NavigationFailed="ContentFrame_NavigationFailed" JournalOwnership="OwnsJournal" Loaded="AnotherFrame_Loaded"> </navigation:Frame>--> <StackPanel Orientation="Vertical" Height="82" Width="Auto" HorizontalAlignment="Right" Margin="0,0,0,0" DataContext="{Binding}"> <TextBlock HorizontalAlignment="Right" Foreground="White" x:Name="ApplicationNameTextBlock4" Style="{StaticResource ApplicationNameStyle}" FontSize="20" Text="?????? ???????" Margin="20,16,20,0"/> <StackPanel Orientation="Horizontal" HorizontalAlignment="Right"> <Image x:Name="imDoor" Visibility="Collapsed" MouseEnter="imDoor_MouseEnter" MouseLeave="imDoor_MouseLeave" Height="24" Stretch="Fill" Width="25" Margin="10,0,10,0" Source="/CAP;component/Images/sm_white_doors.png" MouseLeftButtonDown="bTest_Click" /> <TextBlock x:Name="bLogout" MouseEnter="bLogout_MouseEnter" MouseLeave="bLogout_MouseLeave" TextDecorations="Underline" Margin="0,6,20,4" Height="23" Text="?????" 
HorizontalAlignment="Right" Visibility="Collapsed" MouseLeftButtonDown="bTest_Click" FontFamily="Verdana" FontSize="13" FontWeight="Normal" Foreground="#FF1C1C92" /> </StackPanel> </StackPanel> </Border> <Border x:Name="bSiteMap" Margin="0,0,0,0" > <StackPanel x:Name="spSiteMap" Orientation="Horizontal" Height="20" Width="Auto" HorizontalAlignment="Left" Margin="0,0,0,0" DataContext="{Binding}"> <!-- <TextBlock Visibility="Visible" TextDecorations="Underline" Height="23" HorizontalAlignment="Left" x:Name="ar" Text="1" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" Height="23" HorizontalAlignment="Left" x:Name="Map" Text="->" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" TextDecorations="Underline" Height="23" HorizontalAlignment="Left" x:Name="ar1" Text="2" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" Height="23" HorizontalAlignment="Left" x:Name="Map1" Text="->" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" TextDecorations="Underline" Height="23" HorizontalAlignment="Left" x:Name="ar2" Text="3" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" />--> </StackPanel> </Border> <Border Width="auto" Height="auto" x:Name="ContentBorder" Margin="0,0,0,0" > <navigation:Frame x:Name="ContentFrame" Style="{StaticResource ContentFrameStyle}" Source="Main" Navigated="ContentFrame_Navigated" NavigationFailed="ContentFrame_NavigationFailed" ToolTipService.ToolTip=" " Margin="0,0,0,0"> <navigation:Frame.UriMapper> <uriMapper:UriMapper> <!--Client--> <uriMapper:UriMapping Uri="RegistrateClient" MappedUri="/Views/Client/RegistrateClient.xaml"/> <!--So on--> </uriMapper:UriMapper> </navigation:Frame.UriMapper> </navigation:Frame> </Border> </StackPanel> <Grid x:Name="NavigationGrid" Style="{StaticResource NavigationGridStyle}" Margin="0,0,0,0" Background="{x:Null}" > <StackPanel Orientation="Vertical" Height="Auto" Width="250" HorizontalAlignment="Center" Margin="0,0,0,50" DataContext="{Binding}"> <Image Width="150" Height="90" HorizontalAlignment="Center" VerticalAlignment="Top" Source="/CAP;component/Images/logo__au.png" Margin="0,20,0,70"/> <Border x:Name="BrandingBorder" MinHeight="222" Width="250" Style="{StaticResource BrandingBorderStyle3}" HorizontalAlignment="Center" Opacity="60" Margin="0,0,0,0"> <Border.Background> <ImageBrush ImageSource="/CAP;component/Images/papka.png"/> </Border.Background> <Grid Width="250" x:Name="LichniyCabinet" Margin="0,10,0,0" HorizontalAlignment="Center" Height="211"> <Grid.ColumnDefinitions> <ColumnDefinition Width="19*" /> <ColumnDefinition Width="62*" /> <ColumnDefinition Width="151*" /> <ColumnDefinition Width="18*" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="13" /> <RowDefinition Height="24" /> <RowDefinition Height="35" /> <RowDefinition Height="35" /> <RowDefinition Height="43" /> <RowDefinition Height="28" /> <RowDefinition Height="32*" /> </Grid.RowDefinitions> <TextBlock Visibility="Visible" Grid.Row="2" Height="23" HorizontalAlignment="Left" x:Name="tLogin" Text="?????" VerticalAlignment="Top" FontFamily="Verdana" FontSize="13" Foreground="White" Margin="1,0,0,0" Grid.Column="1" /> <TextBlock Visibility="Visible" FontFamily="Verdana" FontSize="13" Foreground="White" Height="23" HorizontalAlignment="Left" x:Name="tPassw" Text="??????" 
VerticalAlignment="Top" Grid.Row="3" Grid.Column="1" /> <TextBox Visibility="Visible" Grid.Column="2" Grid.Row="2" Height="24" HorizontalAlignment="Left" x:Name="logLogin" VerticalAlignment="Top" Width="150" /> <PasswordBox Visibility="Visible" Code:DefaultButtonService.DefaultButton="{Binding ElementName=bLogin}" PasswordChar="*" Height="24" HorizontalAlignment="Left" x:Name="logPassword" VerticalAlignment="Top" Width="150" Grid.Column="2" Grid.Row="3" /> <Button x:Name="bLogin" MouseEnter="bLogin_MouseEnter" MouseLeave="bLogin_MouseLeave" Visibility="Visible" Content="?????" Grid.Column="2" Grid.Row="4" Click="Button_Click" Height="23" HorizontalAlignment="Left" Margin="81,0,0,0" VerticalAlignment="Top" Width="70" /> <TextBlock MouseLeftButtonDown="ForgotPassword_MouseLeftButtonDown" MouseEnter="ForgotPassword_MouseEnter" MouseLeave="ForgotPassword_MouseLeave" Visibility="Visible" TextDecorations="Underline" Grid.ColumnSpan="2" Grid.Row="4" Height="23" HorizontalAlignment="Left" x:Name="ForgotPassword" Text="?????? ???????" VerticalAlignment="Top" Foreground="White" FontFamily="Verdana" FontSize="13" Grid.Column="1" /> <TextBlock MouseEnter="tbRegistration_MouseEnter" MouseLeave="tbRegistration_MouseLeave" MouseLeftButtonDown="tbRegistration_MouseLeftButtonDown" Grid.Column="2" Grid.Row="6" Height="23" x:Name="tbRegistration" TextDecorations="Underline" Text="???????????" VerticalAlignment="Top" FontFamily="Verdana" FontSize="13" TextAlignment="Center" HorizontalAlignment="Center" Foreground="#FF1C1C92" FontWeight="Normal" Margin="0,0,57,0" /> <TextBlock Cursor="Arrow" Height="23" HorizontalAlignment="Left" Margin="11,-3,0,0" Text="?????? ???????" VerticalAlignment="Top" Grid.ColumnSpan="3" Grid.RowSpan="2" FontFamily="Verdana" FontSize="13" FontWeight="Bold" Foreground="White" /> <Image Visibility="Collapsed" Height="70" x:Name="imUser" Stretch="Fill" Width="70" Grid.ColumnSpan="2" Margin="11,0,0,0" Grid.Row="2" Grid.RowSpan="2" Source="/CAP;component/Images/user2.png" /> <TextBlock x:Name="tbHello" Grid.Column="2" Visibility="Collapsed" Grid.Row="2" Height="auto" TextWrapping="Wrap" HorizontalAlignment="Left" Margin="6,0,0,0" Text="" VerticalAlignment="Top" FontFamily="Verdana" FontSize="13" Foreground="White" Width="145" /> </Grid> </Border> <Border x:Name="MenuBorder" Margin="0,0,0,50" Width="250" Visibility="Collapsed"> <StackPanel x:Name="spMenu" Width="240" HorizontalAlignment="Left"> <telerikNavigation:RadTreeView x:Name="tvMainMenu" Width="240" Selected="TreeView1_Selected" SelectedValuePath="Text" telerik:Theming.Theme="Windows7" FontFamily="Verdana" FontSize="12"/> </StackPanel> </Border> </StackPanel> </Grid> <Border x:Name="FooterBorder" VerticalAlignment="Bottom" Width="auto" Height="76"> <Border.Background> <ImageBrush ImageSource="/CAP;component/Images/footer2.png" /> </Border.Background> <TextBlock x:Name="tbFooter" Height="24" Width="auto" Margin="0,20,0,0" TextAlignment="Center" HorizontalAlignment="Stretch" VerticalAlignment="Center" Foreground="White" FontFamily="Verdana" FontSize="11"> </TextBlock> </Border> </Grid> </ScrollViewer.Content> </ScrollViewer> </UserControl> MainPage.xaml.cs using System; using System.Collections.Generic; using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Navigation; using CAP.Code; using CAP.Registrator; using System.Windows.Input; using System.ComponentModel.DataAnnotations; using System.Windows.Browser; using Telerik.Windows.Controls; using System.Net; using 
System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Navigation; using System.Windows.Shapes; namespace CAP { public partial class MainPage { public App Appvars = Application.Current as App; private readonly RegistratorClient registrator; public SiteMapBuilder builder; public MainPage() { InitializeComponent(); sV.SetIsMouseWheelScrollingEnabled(true); builder = new SiteMapBuilder(spSiteMap); try { //working with service } catch { this.ContentFrame.Navigate(new Uri(String.Format("ErrorPage"), UriKind.RelativeOrAbsolute)); } } /// Recursive method to update the correct scrollviewer (if exists) private ScrollViewer CheckParent(FrameworkElement element) { ScrollViewer _result = element as ScrollViewer; if (element != null && _result == null) { FrameworkElement _temp = element.Parent as FrameworkElement; _result = CheckParent(_temp); } return _result; } // If an error occurs during navigation, show an error window private void ContentFrame_NavigationFailed(object sender, NavigationFailedEventArgs e) { e.Handled = true; ChildWindow errorWin = new ErrorWindow(e.Uri); errorWin.Show(); } } }

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
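    As an aside (not part of the original answer): if the page does later move to ASP.Net, the same stored procedure call could look something like the C# sketch below. The parameter names come from the DataPost procedure above; the connection string and method name are placeholders.

    using System.Data;
    using System.Data.SqlClient;

    static void PostElements(string connectionString, string metadataXml, string dataXml)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("dbo.DataPost", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            // ntext parameters on SQL 2000 map to SqlDbType.NText; on 2005+ these could become xml or nvarchar(max).
            command.Parameters.Add("@ElementMetadata", SqlDbType.NText).Value = metadataXml;
            command.Parameters.Add("@ElementData", SqlDbType.NText).Value = dataXml;

            connection.Open();
            // Use ExecuteReader instead if you need the recordset of new ElementIDs the procedure returns.
            command.ExecuteNonQuery();
        }
    }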

    Read the article

  • ASP.NET Web Forms Extensibility: Handler Factories

    - by Ricardo Peres
    A handler factory is the class that implements IHttpHandlerFactory and is responsible for instantiating a handler (IHttpHandler) that will process the current request. This is true for all kinds of web requests, whether they are for ASPX pages, ASMX/SVC web services, ASHX/AXD handlers, or any other kind of file. Handler factories are also used to restrict access to certain file types, such as .config and .csproj files. Handler factories are registered in the global Web.config file, normally located at %WINDIR%\Microsoft.NET\Framework<x64>\vXXXX\Config, for a given path and request type (GET, POST, HEAD, etc). This goes in the <httpHandlers> section. You would create a custom handler factory for a number of reasons; let me list just two:

    - A centralized place for using dependency injection;
    - A centralized place for invoking custom methods or performing some kind of validation on all pages.

    Let's see an example using Unity for injecting dependencies into a page. Suppose we have this in Global.asax.cs:

    public class Global : HttpApplication
    {
        internal static readonly IUnityContainer Unity = new UnityContainer();

        void Application_Start(Object sender, EventArgs e)
        {
            Unity.RegisterType<IFunctionality, ConcreteFunctionality>();
        }
    }

    We instantiate Unity and register a concrete implementation for an interface; this could/should probably go in the Web.config file. Forget about its actual definition, it's not important. Then, we create a custom handler factory:

    public class UnityPageHandlerFactory : PageHandlerFactory
    {
        public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path)
        {
            IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path);

            //one scenario: inject dependencies
            Global.Unity.BuildUp(handler.GetType(), handler, String.Empty);

            return (handler);
        }
    }

    It inherits from PageHandlerFactory, which is .NET's included factory for building regular ASPX pages. We override the GetHandler method and issue a call to the BuildUp method, which will inject required dependencies, if any exist. An example page with dependencies might be:

    public class SomePage : Page
    {
        [Dependency]
        public IFunctionality Functionality
        {
            get;
            set;
        }
    }

    Notice the DependencyAttribute: it is used by Unity to identify properties that require dependency injection. When BuildUp is called, the Functionality property (or any other properties with the DependencyAttribute attribute) will receive the concrete implementation associated with its type, as registered with Unity.

    Another example: checking a page for authorization. Let's define an interface first:

    public interface IRestricted
    {
        Boolean Check(HttpContext ctx);
    }

    And a page implementing that interface:

    public class RestrictedPage : Page, IRestricted
    {
        public Boolean Check(HttpContext ctx)
        {
            //check the context and return a value
            return ...;
        }
    }

    For this, we would use a handler factory such as this:

    public class RestrictedPageHandlerFactory : PageHandlerFactory
    {
        private static readonly IHttpHandler forbidden = new UnauthorizedHandler();

        public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path)
        {
            IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path);

            if (handler is IRestricted)
            {
                if ((handler as IRestricted).Check(context) == false)
                {
                    return (forbidden);
                }
            }

            return (handler);
        }
    }

    public class UnauthorizedHandler : IHttpHandler
    {
        #region IHttpHandler Members

        public Boolean IsReusable
        {
            get { return (true); }
        }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.StatusCode = (Int32) HttpStatusCode.Unauthorized;
            context.Response.ContentType = "text/plain";
            context.Response.Write(context.Response.Status);
            context.Response.Flush();
            context.Response.Close();
            context.ApplicationInstance.CompleteRequest();
        }

        #endregion
    }

    The UnauthorizedHandler is an example of an IHttpHandler that merely returns an error code to the client, but does not cause redirection to the login page; it is included merely as an example. One thing we must keep in mind: there can be only one handler factory registered for a given path/request type (verb) tuple. A typical registration would be:

    <httpHandlers>
        <remove path="*.aspx" verb="*"/>
        <add path="*.aspx" verb="*" type="MyNamespace.MyHandlerFactory, MyAssembly"/>
    </httpHandlers>

    First we remove the previous registration for ASPX files, and then we register our own. And that's it. A very useful mechanism which I use lots of times.
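    Purely as an illustration (the article deliberately leaves IFunctionality undefined), the interface and concrete implementation used in the Unity example might look as simple as this hypothetical pair:

    public interface IFunctionality
    {
        String DoSomething();
    }

    public class ConcreteFunctionality : IFunctionality
    {
        public String DoSomething()
        {
            // Placeholder behavior, purely for the dependency injection demo.
            return ("Hello from the injected dependency");
        }
    }

    With that registered in Application_Start as above, any page exposing a [Dependency]-decorated IFunctionality property receives the concrete instance when the handler factory calls BuildUp.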

    Read the article

  • Great event : Microsoft Visual Studio 2010 Launch @ Microsoft TechEd Blore

    - by sathya
    I was really excited to attend day 1 of Microsoft TechEd 2010 in Bangalore. This is the first TechEd I have attended. The event was filled with knowledge-sharing sessions and lots of goodies and gifts from the partners.

    The event started with Murthy's session. He related developers to the 5 elements of nature (Pancha Boothaas):
    1. Fire - Passion
    2. Wave (Water) - Catch the right wave that we need to apply
    3. Earth - Connections and lots of opportunities around the world
    4. Air - It is whatever we breathe. Developers: without them nothing is possible; they are like the air
    5. Sky - Cloud based applications

    Next came the keynote and the announcement of Visual Studio by SomaSegar. The list of things he covered in his speech:
    1. Announcement of Visual Studio 2010
    2. Announcement of .NET 4.0
    3. Announcement of Silverlight later this week
    4. What is the current trend? Microsoft did research with many developers across the globe and got the following feedback:
    Get Lost (interrupted) - When we are doing some work and somebody calls or interrupts us in some other way, we lose track of what we were doing and have to start from the beginning.
    Falling Behind - Technology gets updated phenomenally over time, and developers often feel they are not working with state-of-the-art technology and doubt whether they are staying up to date.
    Lack of Collaboration - When a manager asks what the team members have done, some things might be done and some might not be, and finally everyone ends up in a state of not knowing where they are.
    These three points have been addressed in VS 2010 with the following features:
    Get Lost - Some cool features to overcome this: a graphical interface which shows what we have done and where we are, plus zoom features at the code level.
    Falling Behind - Everything is based on the .NET language base. 2010 has been built in such a way that if developers know the native language, that's enough for building good applications.
    Lack of Collaboration - Dashboard features which show exactly where the project is, and a graphical user interface which, on clicking, drills down even to the code level.
    5. An overview of all the new features in VS 2010
    6. Some good demos of new features in VS 2010 by Polita and one more girl

    Some of the new features included:
    1. Team Explorer
    2. Zoom in code
    3. Ribbon development
    4. Development on a single platform for Windows Phone, XBox, Zune, Azure, web based and Windows based applications
    5. Sequence diagram generation directly from code
    6. Dashboards to show project status
    7. JavaScript and jQuery IntelliSense
    8. Native support for jQuery
    9. Packaging feature for deployment
    10. Generation of different versions of web.config, like Web.Config.Production, Web.Config.Staging, etc.
    11. IntelliTrace - eliminating the "Not Reproducible" statement
    12. Automated user interface testing

    At the close of the day we had a great event called Demo Extravaganza, where lots of cool projects launched by Microsoft, as well as projects still under research, were shown. I got a lot of info about Bing today. BING really rocks!!! It has the following:
    1. Visual search
    2. Product based search. For each product, different menu filters are provided to make an advanced search
    3. BING Maps was awesome!! It zoomed in to street level, and you can imagine you are the person walking or running on the road, seeing real objects like buildings moving past you
    4. PhotoSynth is used in BING to show all the images taken around the globe in a 3D format
    5. Formula - if we give some formula, it automatically gives the value for the variable or the derivation of the expression

    There was also some info about some cool touch apps which handle authentication and compute the points TechEd attendees have scored and the sessions they attended. One guy won an XBox as a gift in a lucky draw. There were lots of partner stalls, like Accenture, Intel, Citrix, Micro Focus, Telerik, Infragistics, Sapient, etc. Some offers were provided for us, like 50% off certifications, one free e-learning course, etc. Stay tuned!! Will update you on other events too.

    Read the article

  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I'd share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components; I just have a separate project file. The high level steps are listed below, followed by some more details.

    - Create a 2012 copy of the project file
    - Upgrade the project - just open the new project file in VS2010
    - Change target framework to .NET 4.0
    - Set conditional compilation symbol for DENALI
    - Change any conditional code, including assembly version and UI type name
    - Edit the project file to change referenced assemblies for 2012

    Change target framework to .NET 4.0

    Open the project properties. On the Application page, change the Target framework to .NET Framework 4.

    Set conditional compilation symbol for DENALI

    Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI.

    Change any conditional code, including assembly version and UI type name

    The value doesn't have to be DENALI, it can actually be anything you like; that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don't have the same issues between 2008 and 2012, I still have some sections of code that do change, such as the assembly attributes.

    #if DENALI
    [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")]
    [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")]
    [assembly: AssemblyVersion("3.0.0.0")]
    #else
    [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")]
    [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")]
    [assembly: AssemblyVersion("2.0.0.0")]
    #endif

    The Visual Studio editor automatically formats the code based on the current compilation symbols, hence in this case the 2008 code is greyed to indicate it is disabled. As you can see in the previous example, I have distinct assembly version attributes, ensuring I can run the 2008 and 2012 versions of my component side by side. For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above, I use the conditional compilation symbol to control the code.

    #if DENALI
    [DtsTask
    (
        DisplayName = "File Watcher Task",
        Description = "File Watcher Task",
        IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
        UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
        TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com"
    )]
    #else
    [DtsTask
    (
        DisplayName = "File Watcher Task",
        Description = "File Watcher Task",
        IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
        UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
        TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com"
    )]
    #endif
    public sealed class FileWatcherTask : Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend
    {
        // .. code goes on...
    }

    Shown below is another example I found that needed changing. I borrow one of the MS editors, and use it against a custom property, but need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component, hence it has tests for both the DENALI and KATMAI symbols.

    #if DENALI
    const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.00.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
    #elif KATMAI
    const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
    #else
    const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
    #endif

    // Create Match Expression parameter
    IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection;
    IDTSCustomProperty100 property = propertyCollection.New();
    property = propertyCollection.New();
    property.Name = MatchParams.Name;
    property.Description = MatchParams.Description;
    property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName;
    property.UITypeEditor = multiLineUI;
    property.Value = MatchParams.DefaultValue;

    Edit the project file to change referenced assemblies for 2012

    We now need to edit the project file itself. Open MyComponent2012.csproj in your favourite text editor, and then perform a couple of find-and-replaces as listed below:

    Find: Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Replace: Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Comment: Changes the assembly reference versions from SQL Server 2008 to SQL Server 2012.

    Find: Microsoft SQL Server\100\
    Replace: Microsoft SQL Server\110\
    Comment: Changes any assembly reference hint path locations from SQL Server 2008 to SQL Server 2012.

    If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location instead of 100 for SQL Server 2008, but that was covered by the previous find and replace.

    "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)"
    copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y
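    The article shows the conditional DtsTask attribute; a pipeline component gets the same treatment through DtsPipelineComponent. The sketch below is hypothetical (display name, icon resource and UITypeName strings are illustrative, reusing the public key token from the File Watcher Task example above); the point is simply that the UITypeName version and CurrentVersion vary per target.

    #if DENALI
    [DtsPipelineComponent
    (
        DisplayName = "Data Generator Source",
        ComponentType = ComponentType.SourceAdapter,
        IconResource = "Konesans.Dts.Pipeline.DataGeneratorSource.DataGenerator.ico",
        UITypeName = "Konesans.Dts.Pipeline.DataGeneratorSource.DataGeneratorSourceUI, Konesans.Dts.Pipeline.DataGeneratorSource, Version=3.0.0.0, Culture=Neutral, PublicKeyToken=b2ab4a111192992b",
        CurrentVersion = 3
    )]
    #else
    [DtsPipelineComponent
    (
        DisplayName = "Data Generator Source",
        ComponentType = ComponentType.SourceAdapter,
        IconResource = "Konesans.Dts.Pipeline.DataGeneratorSource.DataGenerator.ico",
        UITypeName = "Konesans.Dts.Pipeline.DataGeneratorSource.DataGeneratorSourceUI, Konesans.Dts.Pipeline.DataGeneratorSource, Version=2.0.0.0, Culture=Neutral, PublicKeyToken=b2ab4a111192992b",
        CurrentVersion = 2
    )]
    #endif
    public class DataGeneratorSource : PipelineComponent
    {
        // component implementation unchanged between versions
    }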

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily in which queries were being called most frequently or were running the longest. But, you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So, I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see if we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning. The first thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both: Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular, peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server? It's all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server, it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago. Looking at the Pages/sec, again, just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall, it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure, but just a load measurement; still, from this I don't think we're looking at major memory issues, though I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact, there was a point earlier in the year that looks pretty severe. Plus the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server. Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance.
If I saw spikes in memory or disk that corresponded to the drop in CPU, you could assume something was using those other resources and causing a drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.

    Read the article

  • Off center projection

    - by N0xus
    I'm trying to implement the code that was freely given by a very kind developer at the following link: http://forum.unity3d.com/threads/142383-Code-sample-Off-Center-Projection-Code-for-VR-CAVE-or-just-for-fun Right now, all I'm trying to do is bring it in on one camera, but I have a few issues. My class, looks as follows: using UnityEngine; using System.Collections; public class PerspectiveOffCenter : MonoBehaviour { // Use this for initialization void Start () { } // Update is called once per frame void Update () { } public static Matrix4x4 GeneralizedPerspectiveProjection(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float near, float far) { Vector3 va, vb, vc; Vector3 vr, vu, vn; float left, right, bottom, top, eyedistance; Matrix4x4 transformMatrix; Matrix4x4 projectionM; Matrix4x4 eyeTranslateM; Matrix4x4 finalProjection; ///Calculate the orthonormal for the screen (the screen coordinate system vr = pb - pa; vr.Normalize(); vu = pc - pa; vu.Normalize(); vn = Vector3.Cross(vr, vu); vn.Normalize(); //Calculate the vector from eye (pe) to screen corners (pa, pb, pc) va = pa-pe; vb = pb-pe; vc = pc-pe; //Get the distance;; from the eye to the screen plane eyedistance = -(Vector3.Dot(va, vn)); //Get the varaibles for the off center projection left = (Vector3.Dot(vr, va)*near)/eyedistance; right = (Vector3.Dot(vr, vb)*near)/eyedistance; bottom = (Vector3.Dot(vu, va)*near)/eyedistance; top = (Vector3.Dot(vu, vc)*near)/eyedistance; //Get this projection projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); //Fill in the transform matrix transformMatrix = new Matrix4x4(); transformMatrix[0, 0] = vr.x; transformMatrix[0, 1] = vr.y; transformMatrix[0, 2] = vr.z; transformMatrix[0, 3] = 0; transformMatrix[1, 0] = vu.x; transformMatrix[1, 1] = vu.y; transformMatrix[1, 2] = vu.z; transformMatrix[1, 3] = 0; transformMatrix[2, 0] = vn.x; transformMatrix[2, 1] = vn.y; transformMatrix[2, 2] = vn.z; transformMatrix[2, 3] = 0; transformMatrix[3, 0] = 0; transformMatrix[3, 1] = 0; transformMatrix[3, 2] = 0; transformMatrix[3, 3] = 1; //Now for the eye transform eyeTranslateM = new Matrix4x4(); eyeTranslateM[0, 0] = 1; eyeTranslateM[0, 1] = 0; eyeTranslateM[0, 2] = 0; eyeTranslateM[0, 3] = -pe.x; eyeTranslateM[1, 0] = 0; eyeTranslateM[1, 1] = 1; eyeTranslateM[1, 2] = 0; eyeTranslateM[1, 3] = -pe.y; eyeTranslateM[2, 0] = 0; eyeTranslateM[2, 1] = 0; eyeTranslateM[2, 2] = 1; eyeTranslateM[2, 3] = -pe.z; eyeTranslateM[3, 0] = 0; eyeTranslateM[3, 1] = 0; eyeTranslateM[3, 2] = 0; eyeTranslateM[3, 3] = 1f; //Multiply all together finalProjection = new Matrix4x4(); finalProjection = Matrix4x4.identity * projectionM*transformMatrix*eyeTranslateM; //finally return return finalProjection; } // Update is called once per frame public void FixedUpdate () { Camera cam = camera; //calculate projection Matrix4x4 genProjection = GeneralizedPerspectiveProjection( new Vector3(0,1,0), new Vector3(1,1,0), new Vector3(0,0,0), new Vector3(0,0,0), cam.nearClipPlane, cam.farClipPlane); //(BottomLeftCorner, BottomRightCorner, TopLeftCorner, trackerPosition, cam.nearClipPlane, cam.farClipPlane); cam.projectionMatrix = genProjection; } } My error lies in projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); The debugger states: Expression denotes a `type', where a 'variable', 'value' or 'method group' was expected. 
Thus, I changed the line to read: projectionM = new PerspectiveOffCenter(left, right, bottom, top, near, far); But then the error is changed to: The type 'PerspectiveOffCenter' does not contain a constructor that takes '6' arguments. For reasons that are obvious. So, finally, I changed the line to read: projectionM = new GeneralizedPerspectiveProjection(left, right, bottom, top, near, far); And the error I get is: is a 'method' but a 'type' was expected. With this last error, I'm not sure what it is I should do / missing. Can anyone see what it is that I'm missing to fix this error?
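One likely cause: PerspectiveOffCenter is the name of the class itself, so the call resolves to a type rather than a method – the original forum sample presumably included a separate helper that builds the off-center frustum matrix, and that helper was not copied across. A minimal sketch of such a helper (untested, using the standard asymmetric-frustum projection matrix; the name BuildPerspectiveOffCenter is just an assumption chosen to avoid the clash with the class name):
// Builds an off-center (asymmetric frustum) perspective projection matrix.
static Matrix4x4 BuildPerspectiveOffCenter(float left, float right, float bottom, float top, float near, float far)
{
    float x = 2.0f * near / (right - left);
    float y = 2.0f * near / (top - bottom);
    float a = (right + left) / (right - left);
    float b = (top + bottom) / (top - bottom);
    float c = -(far + near) / (far - near);
    float d = -(2.0f * far * near) / (far - near);
    Matrix4x4 m = new Matrix4x4();
    m[0, 0] = x;  m[0, 1] = 0f; m[0, 2] = a;   m[0, 3] = 0f;
    m[1, 0] = 0f; m[1, 1] = y;  m[1, 2] = b;   m[1, 3] = 0f;
    m[2, 0] = 0f; m[2, 1] = 0f; m[2, 2] = c;   m[2, 3] = d;
    m[3, 0] = 0f; m[3, 1] = 0f; m[3, 2] = -1f; m[3, 3] = 0f;
    return m;
}
With a helper like that in place, the failing line would read projectionM = BuildPerspectiveOffCenter(left, right, bottom, top, near, far); and all three compile errors should disappear.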

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider.  There are certain things you know from experience about when to make software more flexible and when to save time.  This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs. It only makes sense that Twitter should begin versioning their API as it matures.  In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends.  Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that’s what factories are for.  Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is) if Twitter had more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true.  …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if you’re using LINQ to Twitter because I pulled all of that together in a consistent way so that you don’t have to. Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place.  This time, in the case of a URL change, the adjustment is easy and no-one has to wait for me.  Essentially, all you need to do is change the URL passed to the TwitterContext constructor.  Here’s an example of instantiating a TwitterContext now: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what’s coming next; another constructor, but with the SearchUrl parameter set to the new URL as follows: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/")) One consequence of setting the URL this way is that you set the URL for both Trends and Search.  Since Search is still using the old URL, this is going to break for Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example: twitterCtx.SearchUrl = "https://api.twitter.com/1/"; var trends = (from trend in twitterCtx.Trends where trend.Type == TrendType.Daily && trend.Date == DateTime.Now.AddDays(-2).Date select trend) .ToList(); Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use the same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above.
These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress.  This capability lets you use LINQ to Twitter with any of these services.  This is a testament to the success of the Twitter API and its popularity. No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo
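For illustration, a minimal sketch (untested) of pointing LINQ to Twitter at one of those Twitter-compatible services by supplying a different BaseUrl; the identi.ca URL below is an assumption – substitute whatever API root the target service actually exposes:
using (var twitterCtx = new TwitterContext(auth, "http://identi.ca/api/", "http://identi.ca/api/"))
{
    // the usual LINQ to Twitter queries then run against the other service
    var statuses = (from status in twitterCtx.Status
                    where status.Type == StatusType.Friends
                    select status)
                   .ToList();
}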

    Read the article

  • SQLAuthority Guest Post – Lessons from Life and Work by Srini Chandra (Author of 3 Lives, in search of bliss)

    - by pinaldave
    Work and life are confusing terms when put together. How can one consider work outside of life? Work should be part of life – or are we considering ourselves dead when we are at work? I have often seen developers and DBAs complaining and confused about their job, work and life. Complaining is easy and everyone can do it. I have quite often heard the expression – “I do not have any other option.” I requested Srini Chandra (renowned author of Amazon Best Seller 3 Lives, in search of bliss (Amazon | Flipkart)) to write a guest post on this subject which developers can read and appreciate. Let us see Srini’s thoughts in his own words. Each of us who works in the technology industry carries an especially heavy burden nowadays. For, fate has placed in our hands an awesome power to shape our society and its consciousness. For that reason, we must pay more and more attention to issues of professionalism, social responsibility and ethics. Equally importantly, the responsibility lies in our hands to ensure that we view our work and career as an opportunity to enlighten and lift ourselves up. Story: A Prisoner, 20 years and a Wheel Many years ago, I heard this story from a professor when I was a student at Carnegie Mellon. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At times, when he was tired or puzzled and stopped turning the wheel, he would be flogged with a whip. The man did not know anything about the wheel other than that it was placed outside his jail somewhere. He wondered if the wheel crushed corn or if it ground wheat or something similar. He wondered if turning the wheel was useful to anyone. At the end of his jail term, he rushed out to see what the wheel was doing. To his disappointment, he found that the wheel was not connected to anything. All these years, he had been toiling for nothing. He gave a loud, frustrated shout and dropped dead. How many of us are turning wheels wondering what they are connected to? How many of us have unstated, uncaring attitudes towards our careers? How many of us view work as drudgery, as no more than a way to earn that next paycheck? How many of us have wondered about the spiritually uplifting aspect of work? Can a workforce that views work as merely a chore, be ethical? Can it produce truly life-enhancing technology? Can it make positive contributions to the quality of life of a society? I think not. Thanks to Pinal and you, his readers, for giving me this opportunity to share my thoughts in a series of guest posts. I’d like to present a few ways over the next few weeks, in which we can tap into the liberating potential of work and make our lives better in the process. Now, please allow me to tell you another version of the story that the good professor shared with us in the classroom that day. Story: A Prisoner, 20 years, a Wheel and the LIFE A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At first, his whole body and mind rebelled against his predicament. So, his limbs grew weary and his mind became numb and confused. And then, his self-awareness began to grow. He began to wonder how he came to be in the prison in the first place. He looked around and saw all his fellow prisoners also turning the wheel. His wife, his parents, his friends and his children – they were all in the prison too, and turning their own wheels! He began to wonder how this came about.
As he wondered more and more, he began to focus less on his physical drudgery and boredom. And he began to clearly see his inner spirit which guided him in ways that allowed him to see the world with a universal view. His inner spirit guided him towards the source of eternal wisdom and happiness. He began to see the source of happiness in everything around him – his prison bound relationships, even his jailers and in his wheel. He became a source of light to those around him. His wheel jokes and humor infected them with joy and happiness. Finally, the day came for his release from jail. He walked calmly outside the jail and laughed aloud when he saw that the wheel was not connected to anything. He knelt down, kissed it and thanked it for the wisdom it taught him. Life is the prison. The wheel is your work. Both are sacred. Both have enormous powers to teach us wisdom and bring us happiness. Whether we allow them to do so, is a choice we have to make. Over the next few weeks, I hope to share with you a few lessons that I have learnt at the wheel in my two decades of my career (prison). Thank you for reading, and do let me know what you think. Reference: Srini Chandra (3 Lives, in search of bliss), Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • iPack -The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executable-s. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an.ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executable-s (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as a last step of creating .ipa package on various operating systems. Prototype has been tested by creating a final distributable for JavaFX application that runs on iPad, all done on Windows 7. Source Code The source code of iPac tool is in OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack To build the iPack tool use: rt/tools/ios/Maven/ipack$ mvn package After building, you can run the tool: java -jar <path to ipack.jar> <arguments>  Signing keystore The tool uses a java key store to read the signing certificate and the associated private key. To prepare such keystore users can use keytool from JDK. One possible scenario is to import an existing private key and the certificate from a key store used on Mac OS: To list the content of an existing key store and identify the source alias: keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password> To create Java key store and import the private key with its certificate to the keys store: keytool -importkeystore \ -destkeystore <dst keystore> -deststorepass <dst keystore password> \ -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \ -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password> Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple one can then import the certificate response back to the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command: keytool -list -v -keystore <ipack keystore> -storepass <keystore password> When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we need first to create the chain in a separate file. It is easy to create such chain when working with certificates in Base-64 encoded PEM format. A certificate chain can be created by concatenating PEM certificates, which should form the chain, into a single file. 
For iOS signing we need the following certificates in our chain: Apple Root CA Apple Worldwide Developer Relations CA Our signing leaf certificate To convert a certificate from the binary DER format (.der, .cer) to PEM format: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem To export the signing certificate into PEM format: keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem andSigningCert.pem, it can be imported back into the keystore with: keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem To summarize, the following example shows the full certificate chain replacement process: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem >SigningCertChain.pem keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem keytool -list -v -keystore ipack.ks -storepass keystorepwd Usage When the ipack tool is started with no arguments it prints the following usage information: -appname MyApplication -appid com.myorg.MyApplication     Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ] Signing options: -keystore <keystore> keystore to use for signing -storepass <password> keystore password -alias <alias> alias for the signing certificate chain and the associated private key -keypass <password> password for the private key Application options: -basedir <directory> base directory from which to derive relative paths -appdir <directory> directory with the application executable and resources -appname <file> name of the application executable -appid <id> application identifier Example: ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication    

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real life project a couple of things have dawned on me: As soon as your SSIS Catalog gets a significant amount of data in it the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations as to the SQL statements that I can embed within a SSRS report. SSIS professionals are data guys at heart and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway) Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it but if you would rather just download the bits yourself and start to play you can download v1.0.0.0 from DB v1.0.0.0. Usage Scenarios Most Recent Execution I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get. EXEC [dbo].[sp_ssiscatalog] This will return up to 5 resultsets: EXECUTION - Summary information about the execution including status, start time & end time EVENTS - All events that occurred during the execution OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed OnWarning - All events where event_name is OnWarning EXECUTABLE_STATS - Duration and execution result of every executable in the execution All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution @exec_events @exec_errors @exec_warnings @exec_executable_stats Any Execution As just explained the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should return data for simply supply the execution_id as a parameter: EXEC [dbo].[sp_ssiscatalog] 6 All Executions sp_ssiscatalog can also return information about all executions: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs' The most recent execution will appear at the top. 
sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name @execs_project_name @execs_package_name @execs_executed_as_name @execs_status_desc Some typical usages might be: //Return all failed executions EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed' //Return all executions for a specified folder EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder' //Return all executions of a specified package in a specified project EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx' Installing sp_ssicatalog Under the covers sp_ssiscatalog actually calls many other stored procedures and functions hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in an SQL Server Data Tools (SSDT) database project which means that you have two ways of obtaining it. Download the source code You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer. Download a dacpac Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command-line sounds a little daunting, I assure you it is not. Simply open a Visual Studio command-prompt and cd to the folder containing the downloaded dacpac: Type: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local) or the shortened form: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) remembering to set your server name appropriately (here mine is set to “(local)” ). If everything works successfully you will see this: And you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog:   Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest-assured however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • Goto for the Java Programming Language

    - by darcy
    Work on JDK 8 is well-underway, but we thought this late-breaking JEP for another language change for the platform couldn't wait another day before being published. Title: Goto for the Java Programming Language Author: Joseph D. Darcy Organization: Oracle. Created: 2012/04/01 Type: Feature State: Funded Exposure: Open Component: core/lang Scope: SE JSR: 901 MR Discussion: compiler dash dev at openjdk dot java dot net Start: 2012/Q2 Effort: XS Duration: S Template: 1.0 Reviewed-by: Duke Endorsed-by: Edsger Dijkstra Funded-by: Blue Sun Corporation Summary Provide the benefits of the time-tested goto control structure to Java programs. The Java language has a history of adding new control structures over time: the assert statement in 1.4, the enhanced for-loop in 1.5, and try-with-resources in 7. Having support for goto is long-overdue and simple to implement since the JVM already has goto instructions. Success Metrics The goto statement will allow inefficient and verbose recursive algorithms and explicit loops to be replaced with more compact code. The effort will be a success if at least twenty-five percent of the JDK's explicit loops are replaced with goto's. Coordination with IDE vendors is expected to help facilitate this goal. Motivation The goto construct offers numerous benefits to the Java platform, from increased expressiveness, to more compact code, to providing new programming paradigms to appeal to a broader demographic. In JDK 8, there is a renewed focus on using the Java platform on embedded devices with more modest resources than desktop or server environments. In such contexts, static and dynamic memory footprint is a concern. One significant component of footprint is the code attribute of class files, and certain classes of important algorithms can be expressed more compactly using goto than using other constructs, saving footprint. For example, to implement state machines recursively, some parties have asked for the JVM to support tail calls, that is, to perform a complex transformation with security implications to turn a method call into a goto. Such complicated machinery should not be assumed for an embedded context. A better solution is just to expose to the programmer the desired functionality, goto. The web has familiarized users with a model of traversing links among different HTML pages in a free-form fashion with some state being maintained on the side, such as login credentials, to effect behavior. This is exactly the programming model of goto and code. While in the past this has been derided as leading to "spaghetti code," spaghetti is a tasty and nutritious meal for programmers, unlike quiche. The invokedynamic instruction added by JSR 292 exposes the JVM's linkage operation to programmers. This is a low-level operation that can be leveraged by sophisticated programmers. Likewise, goto is also a low-level operation that should not be hidden from programmers who can use more efficient idioms. Some may object that goto was consciously excluded from the original design of Java as one of the removed features from C and C++. However, the designers of the Java programming language have revisited these removals before. The enum construct was also left out, only to be added in JDK 5, and multiple inheritance was left out, only to be added back by the virtual extension methods of Project Lambda.
As a living language, the needs of the growing Java community today should be used to judge what features are needed in the platform tomorrow; the language should not be forever bound by the decisions of the past. Description From its initial version, the JVM has had two instructions for unconditional transfer of control within a method, goto (0xa7) and goto_w (0xc8). The goto_w instruction is used for larger jumps. All versions of the Java language have supported labeled statements; however, only the break and continue statements were able to specify a particular label as a target with the onerous restriction that the label must be lexically enclosing. The grammar addition for the goto statement is: GotoStatement: goto Identifier ; The new goto statement similar to break except that the target label can be anywhere inside the method and the identifier is mandatory. The compiler simply translates the goto statement into one of the JVM goto instructions targeting the right offset in the method. Therefore, adding the goto statement to the platform is only a small effort since existing compiler and JVM functionality is reused. Other language changes to support goto include obvious updates to definite assignment analysis, reachability analysis, and exception analysis. Possible future extensions include a computed goto as found in gcc, which would replace the identifier in the goto statement with an expression having the type of a label. Testing Since goto will be implemented using largely existing facilities, only light levels of testing are needed. Impact Compatibility: Since goto is already a keyword, there are no source compatibility implications. Performance/scalability: Performance will improve with more compact code. JVMs already need to handle irreducible flow graphs since goto is a VM instruction.

    Read the article

  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010 I’d like to take a  few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals).  I feel it is important to track your goals not only to see if you accomplished them but also to see what new directions in life you pursued.  Once we enter into 2011 I’ll follow up with a new post on goals for the new year. Professional Blog – This year I intended to write at least 2 posts a month.  Looking back I far surpassed that goal by writing 47 posts (this one being my 48th).  As with many things in life, quantity does not mean quality.  A good example is a number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts.  That aside, I like to at least keep content relatively fresh on this blog  which I was able to accomplish.  At the same time I’ve gotten much more comfortable in my blogging style and it has become much easier to write. Speaking – I didn’t define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events.  Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations.  I’m very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result. Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and Stir Trek conference.  I fulfilled both goals and as well as taking on lead organizer duties for the first ever SharePoint Saturday Columbus.  Each of these events and groups turned out to be successful and I was glad to be a part of them all.  I look forward to continuing to volunteer with each next year in some capacity. Android Development – My goal for getting into Android development was a late addition, but one I didn’t necessarily fulfill.  I spent a couple nights downloading the tools, configuring my environment, and going through some “simple” tutorials.  I say “simple” because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than helped.  After about a week I was frustrated with the process and didn’t think it was a good use of my time.  On a side note, I’ve dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proof of concepts. Personal Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis.  For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym.  At the same time I had some major setbacks.  In the spring I badly sprained my ankle and got hit in the knee with a softball which kept me inactive for almost 2 months.  More recently I broke my knuckle (click here to read about it) which I am still recovering from. Volunteering – On the volunteering front I kept my commitments at my parish’s high school youth group.  As for other volunteering opportunities I got involved with a great organization called Columbus Gives Back (website).  I’ve volunteered with them a few times and really enjoy their goal to provide opportunities to people with busy schedules.  They  offer a variety of events typically after work hours and spread out around Columbus with no set commitments on time you need to put in.  If you have the time or motivation I highly recommend them. 
House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead.  I have begun the search for a place in the past few weeks and am excited to begin the process of owning a home. Conclusion     This year I was able to set and achieve many of my goals.  For next year I'll try to put more specific numbers to all of my goals.  If any of you readers set goals for 2011, feel free to send me a link as I'd love to see what you are aiming to accomplish.  Have a great end of 2010 and best wishes for the start of 2011!       -Frog Out

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL Scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL Scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views. Using SQL Projects in SSMS A great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL Scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project name. The number in the folder or file name ensures that the SQL scripts are concatenated together in the order that they need to be executed. Hence the script filenames start with 100, 110 etc. Concatenating SQL Scripts To concatenate the SQL Scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL Script files into a combined file. As the SQL Scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file. Two angled brackets (>>) means append the output of the program to a file. So the command-line DIR > filelist.txt would write the content of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only" which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag. Using SQLCmd to execute the concatenated file Now that the SQL Scripts are all in one big file, we can execute the script against a database using SQLCmd using another batch file as shown below: SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run the SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition. Using SSIS to execute the concatenated file To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package and set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go. Tips and Tricks Add a new-line at the end of every file: The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix to this is to ensure all your files have a new-line at the end. Remove all USE database statements: The SQLCmd identifies which database the script should be run against.  So you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!! Do the Create Database separately: If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.

    Read the article

  • Problems with inheritance query view and one to many association in entity framework 4

    - by Kazys
    Hi, I have situation in with I stucked and don't know way out. The problem is in my bigger model, but I have made small example which shows the same problem. I have 4 tables. I called them SuperParent, NamedParent, TypedParent and ParentType. NamedParent and TypedParent derives from superParent. TypedParent has one to many association with ParentType. I describe mapping for entities using queryView. The problem is then I want to get TypedParents and Include ParentType I get the following exception: An error occurred while preparing the command definition. See the inner exception for details. --- System.ArgumentException: The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[PasibandymaiModel.SuperParent]' but the required type is 'Transient.reference[PasibandymaiModel.TypedParent]'. Parameter name: arguments[1] To get TypedParents I use following code: context.SuperParent.OfType().Include("ParentType"); my edmx file: <edmx:Edmx Version="2.0" xmlns:edmx="http://schemas.microsoft.com/ado/2008/10/edmx"> <!-- EF Runtime content --> <edmx:Runtime> <!-- SSDL content --> <edmx:StorageModels> <Schema Namespace="PasibandymaiModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/02/edm/ssdl"> <EntityContainer Name="PasibandymaiModelStoreContainer"> <EntitySet Name="NamedParent" EntityType="PasibandymaiModel.Store.NamedParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.Store.ParentType" store:Type="Tables" Schema="dbo" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.Store.SuperParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="TypedParent" EntityType="PasibandymaiModel.Store.TypedParent" store:Type="Tables" Schema="dbo" /> <AssociationSet Name="fk_NamedParent_SuperParent" Association="PasibandymaiModel.Store.fk_NamedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="NamedParent" EntitySet="NamedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_ParentType" Association="PasibandymaiModel.Store.fk_TypedParent_ParentType"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_SuperParent" Association="PasibandymaiModel.Store.fk_TypedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> <Property Name="Name" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Name="ParentTypeId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="Name" Type="nvarchar" MaxLength="100" /> </EntityType> <EntityType Name="SuperParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="SomeAttribute" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="TypedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> 
<Property Name="ParentTypeId" Type="int" Nullable="false"/> </EntityType> <Association Name="fk_NamedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="NamedParent" Type="PasibandymaiModel.Store.NamedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="NamedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_ParentType"> <End Role="ParentType" Type="PasibandymaiModel.Store.ParentType" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Function Name="ChildDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> </Function> <Function Name="ChildInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="ChildUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="NamedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> <Function Name="ParentTypeInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" 
Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="TypedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> </Schema> </edmx:StorageModels> <!-- CSDL content --> <edmx:ConceptualModels> <Schema Namespace="PasibandymaiModel" Alias="Self" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2008/09/edm"> <EntityContainer Name="PasibandymaiEntities" annotation:LazyLoadingEnabled="true"> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.ParentType" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.SuperParent" /> <AssociationSet Name="ParentTypeTypedParent" Association="PasibandymaiModel.ParentTypeTypedParent"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="SuperParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent" BaseType="PasibandymaiModel.SuperParent"> <Property Type="String" Name="Name" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Type="Int32" Name="ParentTypeId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="Name" MaxLength="100" FixedLength="false" Unicode="true" /> <NavigationProperty Name="TypedParent" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="ParentType" ToRole="TypedParent" /> </EntityType> <EntityType Name="SuperParent" Abstract="true"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Type="Int32" Name="ParentId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="SomeAttribute" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="TypedParent" BaseType="PasibandymaiModel.SuperParent"> <NavigationProperty Name="ParentType" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="TypedParent" ToRole="ParentType" /> <Property Type="Int32" Name="ParentTypeId" Nullable="false" /> </EntityType> <Association Name="ParentTypeTypedParent"> <End Type="PasibandymaiModel.ParentType" Role="ParentType" Multiplicity="1" /> <End Type="PasibandymaiModel.TypedParent" Role="TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> 
<Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> </Schema> </edmx:ConceptualModels> <!-- C-S mapping content --> <edmx:Mappings> <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2008/09/mapping/cs"> <EntityContainerMapping StorageEntityContainer="PasibandymaiModelStoreContainer" CdmEntityContainer="PasibandymaiEntities"> <EntitySetMapping Name="ParentType"> <QueryView> SELECT VALUE PasibandymaiModel.ParentType(tp.ParentTypeId, tp.Name) FROM PasibandymaiModelStoreContainer.ParentType AS tp </QueryView> </EntitySetMapping> <EntitySetMapping Name="SuperParent"> <QueryView> SELECT VALUE CASE WHEN (np.ParentId IS NOT NULL) THEN PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) WHEN (tp.ParentId IS NOT NULL) THEN PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) END FROM PasibandymaiModelStoreContainer.SuperParent AS sp LEFT JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId LEFT JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.TypedParent"> SELECT VALUE PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.NamedParent"> SELECT VALUE PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId </QueryView> </EntitySetMapping> </EntityContainerMapping> </Mapping> </edmx:Mappings> </edmx:Runtime> </edmx:Edmx> I have tried using AssociationSetMapping instead of using Association with ReferentialConstraint. But then couldn't insert related entities at once, becouse entity framework didn't provided entity key of inserted entities for related entities. Thanks for any idea

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Often times in our code we deal with the bigger classes and types in the BCL, and occasionally forgot that there are some nice methods on the primitive types as well.  Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type. The Background I was examining a piece of code this week where I saw the following: 1: // need to get the 5th (offset 4) character in upper case 2: var type = symbol.Substring(4, 1).ToUpper(); 3:  4: // test to see if the type is P 5: if (type == "P") 6: { 7: // ... do something with P type... 8: } Is there really any error in this code?  No, but it still struck me wrong because it is allocating two very short-lived throw-away strings, just to store and manipulate a single char: The call to Substring() generates a new string of length 1 The call to ToUpper() generates a new upper-case version of the string from Step 1. In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn’t wrong, it’s just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders). One of my favorite books is the C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu.  True, it’s about C++ standards, but there’s also some great general programming advice in there, including two rules I love:         8. Don’t Optimize Prematurely         9. Don’t Pessimize Prematurely We all know what #8 means: don’t optimize when there is no immediate need, especially at the expense of readability and maintainability.  I firmly believe this and in the axiom: it’s easier to make correct code fast than to make fast code correct.  Optimizing code to the point that it becomes difficult to maintain often gains little and often gives you little bang for the buck. But what about #9?  Well, for that they state: “All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write then the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization.” Or, if I may paraphrase: “where it doesn’t increase the code complexity and readability, prefer the more efficient option”. The example code above was one of those times I feel where we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings.  The code creates temporary strings to hold one char, which is just unnecessary.  I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char.  What he didn’t know, however, is that ToUpper() does exist on char, it’s just a static method instead (though you could write an extension method to make it look instance-ish). This leads me (in a long-winded way) to my Little Wonders for the day… Static Methods of System.Char So let’s look at some of these handy, and often overlooked, static methods on the char type: IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() Methods to tell you whether a char (or position in a string) belongs to a category of characters. 
IsLower(), IsUpper() Methods that check if a char (or position in a string) is lower or upper case ToLower(), ToUpper() Methods that convert a single char to the lower or upper equivalent. For example, if you wanted to see if a string contained any lower case characters, you could do the following: 1: if (symbol.Any(c => char.IsLower(c))) 2: { 3: // ... 4: } Incidentally, we could use a method group to shorten the expression to: 1: if (symbol.Any(char.IsLower)) 2: { 3: // ... 4: } Or, if you wanted to verify that all of the characters in a string are digits: 1: if (symbol.All(char.IsDigit)) 2: { 3: // ... 4: } Also, for the IsXxx() methods, there are overloads that take either a char, or a string and an index; this means that these two calls are logically identical: 1: // check given a character 2: if (char.IsUpper(symbol[0])) { ... } 3:  4: // check given a string and index 5: if (char.IsUpper(symbol, 0)) { ... } Obviously, if you just have a char, then you’d just use the first form.  But if you have a string you can use either form equally well. As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem to be redundant but actually have very different purposes.  For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface, but give you different results. IsDigit() returns true if it is a base-10 digit character (‘0’, ‘1’, … ‘9’), whereas IsNumber() returns true if it’s any numeric character, including the characters for ½, ¼, etc. Summary To come full circle back to our opening example, I would have preferred the code be written like this: 1: // grab 5th char and take upper case version of it 2: var type = char.ToUpper(symbol[4]); 3:  4: if (type == 'P') 5: { 6: // ... do something with P type... 7: } Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:    1,000,000 iterations of char method took: 30 ms, 0.000050 ms/item.    1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item. It’s not only immediately faster because we don’t allocate temporary strings, but as an added bonus there is less garbage to collect later as well.  To me this qualifies as a case where we are using a common C# performance idiom (don’t create unnecessary temporary strings) to make our code better. Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string
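On the author's aside about wrapping the static char.ToUpper() in an extension method to make it look "instance-ish", here is a hedged sketch of what that could look like (illustrative code, not from the original post):

    // Illustrative extension method; makes the static char helper read like an instance call.
    public static class CharExtensions
    {
        public static char ToUpper(this char c)
        {
            return char.ToUpper(c);
        }
    }

    // Usage: var type = symbol[4].ToUpper();   // equivalent to char.ToUpper(symbol[4])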

    Read the article

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
    Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include: ·         Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season. ·         Interest in a promotion rises around the time advertising on TV airs ·         Interest in the Sports section of a newspaper rises when there is a big football match There are several ways of dealing with seasonality in predictions. Time Windows If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data, and therefore will be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that: The time window is significantly shorter than the season changes There is enough volume of data in the short time windows to produce an accurate model An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas. Input Data If available, it is possible to include the seasonality effect in the input data for the model. For example the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion – and by the way learn about inter-promotion cross feeding – by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates if the promotion is being advertised. For inputs to properly represent the effect in the model it is necessary that: The model sees enough events with the input present. For example, by virtue of the model lifetime (or time window) being long enough to see several “seasons” or by having enough volume for the model to learn seasonality quickly. Proportional Frequency If we create a model that ignores seasonality it is possible to use that model to predict how the specific person likelihood differs from average. If we have a divergence from average then we can transfer that divergence proportionally to the observed frequency at the time of the prediction. Definitions: Ft = trailing average frequency of the event at time “t”. The average is done over a suitable period of to achieve a statistical significant estimate. F = average frequency as seen by the model. L = likelihood predicted by the model for a specific person Lt = predicted likelihood proportionally scaled for time “t”. If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as: Lt = L * (Ft / F) Considering that: L = (L – F) + F Substituting we get: Lt = [(L – F) + F] * (Ft / F) Which simplifies to: (i)                  Lt = (L – F) * (Ft / F)  +  Ft This latest expression can be understood as “The adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average time the proportion of frequencies”. The formula above assumes a linear translation of the proportion. 
It is possible to generalize the formula using a factor which we will call “a” as follows: (ii) Lt = (L – F) * (Ft / F) * a + Ft It is also possible to use a formula that does not scale the difference, like: (iii) Lt = (L – F) * a + Ft While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights: The Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihoods for different customers is preserved. If F is equal to Ft then the formula reverts to “L”. If Ft = 0 then Lt in (i) and (ii) is 0. It is possible for Lt to be above 1. If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as: If X is 3%, then Y is 6%; if X is 11%, then Y is 22%; if X is 70%, then Y is 85% - in this case we interpret “twice as likely” as “half as likely to not happen”. Applying this reasoning to (i), for example, we would get: If (L < F) or (Ft < 1 / ((L / F) + 1)) Then Lt = L * (Ft / F) Else Lt = 1 – (F / L) + (Ft * F / L)
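For readers who prefer code to algebra, a small illustrative sketch (not from the original post) of formula (i) combined with the capping rule above; variable names follow the definitions in the text, and the numbers in the example are made up:

    // L  = likelihood predicted by the model for a specific person
    // F  = average frequency as seen by the model
    // Ft = trailing average frequency of the event at time t
    static double AdjustForSeason(double L, double F, double Ft)
    {
        if (L < F || Ft < 1.0 / ((L / F) + 1.0))
        {
            return L * (Ft / F);               // formula (i): (L - F) * (Ft / F) + Ft
        }
        return 1.0 - (F / L) + (Ft * F / L);   // capped branch; stays at or below 1 for Ft <= 1
    }

    // Example: AdjustForSeason(0.06, 0.03, 0.09) returns 0.18
    // (the person is twice as likely as average, and the seasonal frequency Ft is 0.09)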

    Read the article

  • How to display a dependent list box disabled if no child data exist

    - by frank.nimphius
    A requirement on OTN was to disable the dependent list box of a model driven list of value configuration whenever the list is empty. To disable the dependent list, the af:selectOneChoice component needs to be refreshed with every value change of the parent list, which however already is the case as the list boxes are already dependent. When you create model driven list of values as choice lists in an ADF Faces page, two ADF list bindings are implicitly created in the PageDef file of the page that hosts the input form. At runtime, a list binding is an instance of FacesCtrlListBinding, which exposes getItems() as a method to access a list of available child data (java.util.List). Using Expression Language, the list is accessible with #{bindings.list_attribute_name.items} To dynamically set the disabled property on the dependent af:selectOneChoice component, however, you need a managed bean that exposes the following two methods //empty – but required – setter method public void setIsEmpty(boolean isEmpty) {} //the method that returns true/false when the list is empty or //has values public boolean isIsEmpty() {   FacesContext fctx = FacesContext.getCurrentInstance();   ELContext elctx = fctx.getELContext();   ExpressionFactory exprFactory =                          fctx.getApplication().getExpressionFactory();   ValueExpression vexpr =                       exprFactory.createValueExpression(elctx,                         "#{bindings.EmployeeId.items}",                       Object.class);   List employeesList = (List) vexpr.getValue(elctx);                        return employeesList.isEmpty()? true : false;      } If referenced from the dependent choice list, as shown below, the list is disabled whenever it contains no list data <! --  master list --> <af:selectOneChoice value="#{bindings.DepartmentId.inputValue}"                                  label="#{bindings.DepartmentId.label}"                                  required="#{bindings.DepartmentId.hints.mandatory}"                                   shortDesc="#{bindings.DepartmentId.hints.tooltip}"                                   id="soc1" autoSubmit="true">      <f:selectItems value="#{bindings.DepartmentId.items}" id="si1"/> </af:selectOneChoice> <! --  dependent  list --> <af:selectOneChoice value="#{bindings.EmployeeId.inputValue}"                                   label="#{bindings.EmployeeId.label}"                                      required="#{bindings.EmployeeId.hints.mandatory}"                                   shortDesc="#{bindings.EmployeeId.hints.tooltip}"                                   id="soc2" disabled="#{lovTestbean.isEmpty}"                                   partialTriggers="soc1">     <f:selectItems value="#{bindings.EmployeeId.items}" id="si2"/> </af:selectOneChoice>

    Read the article

  • Mocking property sets

    - by mehfuzh
    In this post, i will be showing how you can mock property sets with your expected values or even action using JustMock. To begin, we have a sample interface: public interface IFoo {     int Value { get; set; } } Now,  we can create a mock that will throw on any call other than the one expected, generally its a strict mock and we can do it like: bool expected = false;  var foo = Mock.Create<IFoo>(BehaviorMode.Strict);  Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected  = true);    foo.Value = 1;    Assert.True(expected); Here , the method for running though our expectation for set is Mock.ArrangeSet , where we can directly set our expectations or can even set matchers into it like: var foo = Mock.Create<IFoo>(BehaviorMode.Strict);   Mock.ArrangeSet(() => foo.Value = Arg.Matches<int>(x => x > 3));   foo.Value = 4; foo.Value = 5;   Assert.Throws<MockException>(() => foo.Value = 3);   In the example, any set for value not satisfying matcher expression will throw an MockException as this is a strict mock but what will be the case for loose mocks, where we also have to assert it. Here, let’s take an interface with an indexed property. Indexers are treated in the same way as properties, as with basic indexers let you access your class if it were an array. public interface IFooIndexed {     string this[int key] { get; set; } } We want to  setup a value for a particular index,  we then will pass that mock to some implementer where it will be actually called. Once done, we want to assert that if it has been invoked properly. var foo = Mock.Create<IFooIndexed>();   Mock.ArrangeSet(() => foo[0] = "ping");   foo[0] = "ping";   Mock.AssertSet(() => foo[0] = "ping"); In the above example, both the values are user defined, it might happen that we want to make it more dynamic, In this example, i set it up for set with any value and finally checked if it is set with the one i am looking for. var foo = Mock.Create<IFooIndexed>();   Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());   foo[0] = "ping";   Mock.AssertSet(() => foo[0] = Arg.Matches<string>(x => string.Compare("ping", x) == 0)); This is more or less of mocking user sets , but we can further have it to throw exception or even do our own task for a particular set , like : Mock.ArrangeSet(() => foo.MyProperty = 10).Throws(new ArgumentException()); Or  bool expected = false;  var foo = Mock.Create<IFoo>(BehaviorMode.Strict);  Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected  = true);    foo.Value = 1;    Assert.True(expected); Or call the original setter , in this example it will throw an NotImplementedExpectation var foo = Mock.Create<FooAbstract>(BehaviorMode.Strict); Mock.ArrangeSet(() => { foo.Value = 1; }).CallOriginal(); Assert.Throws<NotImplementedException>(() => { foo.Value = 1; });   Finally, try all these, find issues, post them to forum and make it work for you :-). Hope that helps,

    Read the article

  • Build 2012, some thoughts..

    - by Dennis Vroegop
    I think you probably read my rant about the logistics at Build 2012, as posted here, so I am not going into that anymore. Instead, let’s look at the content. (BTW If you did read that post and want some more info then read Nia Angelina’s post about Build. I have nothing to add to that.) As usual, there were good speakers and some speakers who could benefit from some speaker training. I find it hard to understand why Microsoft allows certain people on stage, people who speak English with such strong accents it’s hard for people, especially from abroad, to understand. Some basic training might be useful for some of them. However, it is nice to see that most speakers are project managers, program managers or even devs on the teams that build the stuff they talk about: there was a lot of knowledge on stage! And that means when you ask questions you get very relevant information. I realize I am not the average audience member here, I am regular speaker myself so I tend to look for other things when I am in a room than most audience members so my opinion might differ from others. All in all the knowledge of the speakers was above average but the presentation skills were most of the times below what I would describe as adequate. But let us look at the contents. Since the official name of the conference is Build Windows 2012 it is not surprising most of the talks were focused on building Windows 8 apps. Next to that, there was a lot of focus on Azure and of course Windows Phone 8 that launched the day before Build started. Most sessions dealt with C# and JavaScript although I did see a tendency to use C++ more. Touch. Well, that was the focus on a lot of sessions, that goes without saying. Microsoft is really betting on Touch these days and being a Touch oriented developer I can only applaud this. The term NUI is getting a bit outdated but the principles behind it certainly aren’t. The sessions did cover quite a lot on how to make your applications easy to use and easy to understand. However, not all is touch nowadays; still the majority of people use keyboard and mouse to interact with their machines (or, as I do, use keyboard, mouse AND touch at the same time). Microsoft understands this and has spend some serious thoughts on this as well. It was all about making your apps run everywhere on all sorts of devices and in all sorts of scenarios. I have seen a couple of sessions focusing on the portable class library and on sharing code between Windows 8 and Windows Phone 8. You get the feeling Microsoft is enabling us devs to write software that will be ubiquitous. They want your stuff to be all over the place and they do anything they can to help. To achieve that goal they provide us with brilliant SDK’s, great tooling, a very, very good backend in the form of Windows Azure (I was particularly impressed by the Mobility part of Azure) and some fantastic hardware. And speaking of hardware: the partners such as Acer, Lenovo and Dell are making hardware that puts Apple to a shame nowadays. To illustrate: in Bellevue (very close to Redmond where Microsoft HQ is) they have the Microsoft Store located very close to the Apple Store, so it’s easy to compare devices. And I have to say: the Microsoft offerings are much, much more appealing that what the Cupertino guys have to offer. That was very visible by the number of people visiting the stores: even on the day that Apple launched the iPad Mini there were more people in the Microsoft store than in the Apple store. 
So, the future looks like it’s going to be fun. Great hardware (did I mention the Nokia Lumia 920? No? It’s brilliant), great software (Windows 8 is in a league of its own), the best dev tools (Visual Studio 2012 is still the champion here) and a fantastic backend (Azure.. need I say more?). It’s up to us devs to fill up the stores with applications that matches this. To summarize: it is great to be a Windows developer. PS. Did I mention Surface RT? Man….. People were drooling all over it wherever I went. It is fantastic :-) Technorati Tags: Build,Windows 8,Windows Phone,Lumia,Surface,Microsoft

    Read the article

  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware Cloud Computing allows customers to quickly develop and deploy applications in a shared environment.  The environment can span across hardware (IaaS), foundation-layer software (PaaS), and end-user software (SaaS). Cloud Computing provides compelling benefits in terms of business agility and IT cost savings.  However, with complex existing heterogeneous architectures, and concerns for security and manageability, enterprises are challenged to define their Cloud strategy.  For most enterprises, the solution is a hybrid of private and public cloud.  Fusion Middleware supports customers’ Cloud requirements through choice and portability. Fusion Middleware supports a variety of cloud development and deployment models:  Oracle [Public] Cloud; customer private cloud; a hybrid of these two; and the traditional dedicated, on-premise model. Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need. Oracle Cloud is a public cloud offering.  Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud deployment service. Developer Cloud Service Simplify Development: Automated provisioned environment; pre-configured and integrated; web-based administration Deploy Automatically: Fully integrated with Oracle Cloud for Java deployment; workflow ensures build & test Collaborate & Manage: Fits any size team; integrated team source repository; continuous integration; task/defect tracking Integrated with all major IDEs: Oracle JDeveloper; NetBeans; Eclipse Java Cloud Service Java Cloud Service provides a flexible Java deployment environment for departmental applications and development, staging, QA, training, and demo environments.  It also supports customization deployments for SaaS-based Fusion Applications customers.  Some key features of Java Cloud Service include: WebLogic Server on Exalogic, secure, highly available infrastructure Database Service & IDE Integration Open, Standards-based Deploy Web Apps, Web Services, REST Services Fully managed and supported by Oracle For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service. If your enterprise prefers a private cloud, for reasons such as security, control, manageability, and complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need.  Sometimes called Private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building in the past decade.  The differences, however, are in the scope of the services and the depth of their capabilities.  In terms of vertical stack depth, private clouds not only provide hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need.  Horizontally, private clouds provide monitoring, management, lifecycle, and chargeback capabilities out of the box that shared-services platforms did not have before.
Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds: SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization Identity and Access Management Suite to provide security, provisioning, and access services for applications deployed on your private cloud WebLogic Server to run your applications Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud Exalogic or optimized Oracle-Sun hardware to build out your private cloud The most important differentiator for Oracle's cloud solutions is portability between private and public clouds.  This is unique to Oracle because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings.  Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds.  Conversely, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud.  Oracle can.  It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers for them to build private clouds.  Fundamentally, that enables skills reuse as well as application portability. For more information on Oracle PaaS offerings, please visit Oracle's product information page.    Resources Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Computer Networks UNISA - Chap 14 &ndash; Insuring Integrity &amp; Availability

    - by MarkPearl
    After reading this section you should be able to Identify the characteristics of a network that keep data safe from loss or damage Protect an enterprise-wide network from viruses Explain network and system level fault tolerance techniques Discuss issues related to network backup and recovery strategies Describe the components of a useful disaster recovery plan and the options for disaster contingencies What are integrity and availability? Integrity – the soundness of a networks programs, data, services, devices, and connections Availability – How consistently and reliably a file or system can be accessed by authorized personnel A number of phenomena can compromise both integrity and availability including… security breaches natural disasters malicious intruders power flaws human error users etc Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines… Allow only network administrators to create or modify NOS and application system users. Monitor the network for unauthorized access or changes Record authorized system changes in a change management system’ Install redundant components Perform regular health checks on the network Check system performance, error logs, and the system log book regularly Keep backups Implement and enforce security and disaster recovery policies These are just some of the basics… Malware Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of Malware… Boot sector viruses Macro viruses File infector viruses Worms Trojan Horse Network Viruses Bots Malware characteristics Some common characteristics of Malware include… Encryption Stealth Polymorphism Time dependence Malware Protection There are various tools available to protect you from malware called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware including… Signature Scanning Integrity Checking Monitoring unexpected file changes or virus like behaviours It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general purpose malware policies that can be implemented to protect your network including… Every compute in an organization should be equipped with malware detection and cleaning software that regularly runs Users should not be allowed to alter or disable the anti-malware software Users should know what to do in case the anti-malware program detects a malware virus Users should be prohibited from installing any unauthorized software on their systems System wide alerts should be issued to network users notifying them if a serious malware virus has been detected. Fault Tolerance Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability for a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees, the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally the more fault tolerant the system, the more expensive it is. The following describe some of the areas that need to be considered for fault tolerance. 
Environment (Temperature and humidity) Power Topology and Connectivity Servers Storage Power Typical power flaws include Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power Noise – Fluctuation in voltage levels caused by other devices on the network or electromagnetic interference Brownout – A sag in voltage for just a moment Blackout – A complete power loss There are various alternative power sources to consider, including UPSs and generators. UPSs are found in two categories… Standby UPS – provides continuous power when the mains goes down (brief period of switching over) Online UPS – is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously) Servers There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another. It is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it. Storage There are various techniques available, including the following… RAID Arrays NAS (Network Attached Storage) SANs (Storage Area Networks) Data Backup A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist, with various media that vary in cost and speed, including… Optical Media Tape Backup External Disk Drives Network Backups Backup Strategy After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include… What data must be backed up At what time of day or night will the backups occur How will you verify the accuracy of the backups Where and for how long will backup media be stored Who will take responsibility for ensuring that backups occurred How long will you save backups Where will backup and recovery documentation be stored Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up, including… Full backup – all data on all servers is copied to storage media Incremental backup – only data that has changed since the last full or incremental backup is copied to a storage medium Differential backup – only data that has changed since the last full backup is copied to a storage medium Disaster Recovery Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
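To make the difference between the backup methods above concrete, here is a small illustrative sketch (not part of the original chapter summary; the paths and timestamps are placeholders). An incremental set is chosen relative to the most recent backup of any kind, while a differential set is chosen relative to the last full backup:

    using System;
    using System.IO;
    using System.Linq;

    static class BackupSelection
    {
        // Returns files under root whose last write time is newer than the given cut-off.
        static string[] FilesChangedSince(string root, DateTime sinceUtc)
        {
            return Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories)
                            .Where(f => File.GetLastWriteTimeUtc(f) > sinceUtc)
                            .ToArray();
        }

        static void Main()
        {
            var lastFullBackup = new DateTime(2014, 1, 5, 0, 0, 0, DateTimeKind.Utc);  // placeholder
            var lastAnyBackup  = new DateTime(2014, 1, 8, 0, 0, 0, DateTimeKind.Utc);  // placeholder

            var incremental  = FilesChangedSince(@"D:\Data", lastAnyBackup);   // changed since last backup of any kind
            var differential = FilesChangedSince(@"D:\Data", lastFullBackup);  // changed since last full backup

            Console.WriteLine("Incremental: {0} files, differential: {1} files",
                              incremental.Length, differential.Length);
        }
    }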

    Read the article

  • WebSocket API 1.1 released!

    - by Pavel Bucek
It is my pleasure to announce that the JSR 356 – Java API for WebSocket maintenance release ballot vote finished with a majority of “yes” votes (actually, only one eligible voter did not vote; all other votes were “yeses”). The new release is a maintenance release and it addresses only one issue:  WEBSOCKET_SPEC-226. What changed in 1.1? Version 1.1 is fully backwards compatible with version 1.0; there are only two methods added to javax.websocket.Session: /** * Register to handle to incoming messages in this conversation. A maximum of one message handler per * native websocket message type (text, binary, pong) may be added to each Session. I.e. a maximum * of one message handler to handle incoming text messages a maximum of one message handler for * handling incoming binary messages, and a maximum of one for handling incoming pong * messages. For further details of which message handlers handle which of the native websocket * message types please see {@link MessageHandler.Whole} and {@link MessageHandler.Partial}. * Adding more than one of any one type will result in a runtime exception. * * @param clazz   type of the message processed by message handler to be registered. * @param handler whole message handler to be added. * @throws IllegalStateException if there is already a MessageHandler registered for the same native *                               websocket message type as this handler. */ public void addMessageHandler(Class<T> clazz, MessageHandler.Whole<T> handler); /** * Register to handle to incoming messages in this conversation. A maximum of one message handler per * native websocket message type (text, binary, pong) may be added to each Session. I.e. a maximum * of one message handler to handle incoming text messages a maximum of one message handler for * handling incoming binary messages, and a maximum of one for handling incoming pong * messages. For further details of which message handlers handle which of the native websocket * message types please see {@link MessageHandler.Whole} and {@link MessageHandler.Partial}. * Adding more than one of any one type will result in a runtime exception. * * * @param clazz   type of the message processed by message handler to be registered. * @param handler partial message handler to be added. * @throws IllegalStateException if there is already a MessageHandler registered for the same native *                               websocket message type as this handler. */ public void addMessageHandler(Class<T> clazz, MessageHandler.Partial<T> handler); Why do we need to add those methods? The short and imprecise version: to support Lambda expressions as MessageHandlers. The longer and slightly more precise explanation: the old Session#addMessageHandler method (which is still there and works as it worked till now) relies on getting the generic parameter at runtime, which is not (always) possible. The unfortunate part is that it works for some common cases, and the expert group did not catch this issue before the 1.0 release because of that. The issue is really clearly visible when Lambdas are used as message handlers: session.addMessageHandler(message -> { System.out.println("### Received: " + message); }); There is no way for the JSR 356 implementation to get the type of the used Lambda expression, thus this call will always result in an exception.
Since all modern IDEs recommend using Lambda expressions when possible, and the MessageHandler interfaces are single-method interfaces, everything basically just screams “use Lambdas” all over the place; but when you do that, the application will fail at runtime. The only solution we currently have is to explicitly provide the type of the registered MessageHandler. (There might be another one sometime in the future, when generic type reification is introduced, but that is not going to happen soon enough.) So the example above will then be: session.addMessageHandler(String.class, message -> { System.out.println("### Received: " + message); }); and voila, it works. There are some limitations – you cannot do List<String>.class, so you will need to encapsulate these types when you want to use them in a MessageHandler implementation (something like “class MyType extends ArrayList<String>”). There is no better way to solve this issue, because Java currently does not provide a good way to describe generic types. The API itself is available on Maven Central; look for javax.websocket:javax.websocket-api:1.1. The reference implementation is project Tyrus, which implements WebSocket API 1.1 from version 1.8.

    Read the article

< Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >