Search Results

Search found 1010 results on 41 pages for 'reuse'.

Page 39/41

  • add an image in listview

    - by danish
    Hi, I would like to add more images in my list view, as the code below only alternates between images 1 and 2 across the rows. What I want to do is display a different image for each row. Here is my code below; thanks for any help. I am not good at Java, so please change the code where necessary and I can then refer to it. public class starters extends ListActivity { private static class EfficientAdapter extends BaseAdapter { private LayoutInflater mInflater; private Bitmap mIcon1; private Bitmap mIcon2; private Bitmap mIcon3; private Bitmap mIcon4; private Bitmap mIcon5; private Bitmap mIcon6; private Bitmap mIcon7; private Bitmap mIcon8; private Bitmap mIcon9; private Bitmap mIcon10; public EfficientAdapter(Context context) { // Cache the LayoutInflate to avoid asking for a new one each time. mInflater = LayoutInflater.from(context); // Icons bound to the rows. mIcon1 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters1); mIcon2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters2); mIcon3 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters3); mIcon4 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters4); mIcon5 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters5); mIcon6 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters6); mIcon7 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters7); mIcon8 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters8); mIcon9 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters9); mIcon10 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters10); } public int getCount() { return DATA.length; } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } public View getView(int position, View convertView, ViewGroup parent) { // A ViewHolder keeps references to children views to avoid unnecessary calls // to findViewById() on each row. ViewHolder holder; // When convertView is not null, we can reuse it directly, there is no need // to reinflate it. We only inflate a new View when the convertView supplied // by ListView is null. if (convertView == null) { convertView = mInflater.inflate(R.layout.starters, null); // Creates a ViewHolder and store references to the two children views // we want to bind data to. holder = new ViewHolder(); holder.text = (TextView) convertView.findViewById(R.id.text01); holder.text = (TextView) convertView.findViewById(R.id.secondLine); holder.icon = (ImageView) convertView.findViewById(R.id.icon01); convertView.setTag(holder); } else { // Get the ViewHolder back to get fast access to the TextView // and the ImageView. holder = (ViewHolder) convertView.getTag(); } // Bind the data efficiently with the holder. holder.text.setText(DATA[position]); holder.icon.setImageBitmap((position & 1) ==1 ? 
mIcon1 : mIcon2); return convertView; } static class ViewHolder { TextView text; ImageView icon; } } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setListAdapter(new EfficientAdapter(this)); } private static final String[] DATA = { "Original nachos", "Toasted chicken and cheese quesadillas", "Chicken, lime and coriander nachos", "Spicy bean and cheese quesadillas", "Tuna and corn quesadillas", "Cheesy bean and sweetcorn nachos", "Crispy chicken, avocado and lime salad", "Beef and baby corn tostada", "Spicy mexican rice with chicken and prawns", "Chilli potato boats"}; }
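    A minimal sketch of one way to get a different image per row (an answer sketch, not code from the question): keep the decoded bitmaps in an array and index it by the row position, instead of alternating between mIcon1 and mIcon2 with (position & 1). Note also that holder.text is assigned twice above (from R.id.text01 and then R.id.secondLine), so only the second lookup is actually used. The resource names below are the starters1–starters10 drawables already referenced in the question.

      // Inside EfficientAdapter: replace the ten mIconN fields with one array.
      private Bitmap[] mIcons;

      public EfficientAdapter(Context context) {
          mInflater = LayoutInflater.from(context);
          int[] ids = {
              R.drawable.starters1, R.drawable.starters2, R.drawable.starters3,
              R.drawable.starters4, R.drawable.starters5, R.drawable.starters6,
              R.drawable.starters7, R.drawable.starters8, R.drawable.starters9,
              R.drawable.starters10 };
          mIcons = new Bitmap[ids.length];
          for (int i = 0; i < ids.length; i++) {
              mIcons[i] = BitmapFactory.decodeResource(context.getResources(), ids[i]);
          }
      }

      // ...and at the end of getView(), once the ViewHolder has been resolved:
      holder.text.setText(DATA[position]);
      // one icon per row; wraps around if there are more rows than icons
      holder.icon.setImageBitmap(mIcons[position % mIcons.length]);
      return convertView;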

    Read the article

  • Adventures in Windows 8: Placing items in a GridView with a ColumnSpan or RowSpan

    - by Laurent Bugnion
    Currently working on a Windows 8 app for an important client, I will be writing about small issues, tips and tricks, ideas and whatever occurs to me during the development and the integration of this app. When working with a GridView, it is quite common to use a VariableSizedWrapGrid as the ItemsPanel. This creates a nice flowing layout which will auto-adapt for various resolutions. This is ideal when you want to build views like the Windows 8 start menu. However immediately we notice that the Start menu allows to place items on one column (Smaller) or two columns (Larger). This switch happens through the AppBar. So how do we implement that in our app? Using ColumnSpan and RowSpan When you use a VariableSizedWrapGrid directly in your XAML, you can attach the VariableSizedWrapGrid.ColumnSpan and VariableSizedWrapGrid.RowSpan attached properties directly to an item to create the desired effect. For instance this code create this output (shown in Blend but it runs just the same): <VariableSizedWrapGrid ItemHeight="100" ItemWidth="100" Width="200" Orientation="Horizontal"> <Rectangle Fill="Purple" /> <Rectangle Fill="Orange" /> <Rectangle Fill="Yellow" VariableSizedWrapGrid.ColumnSpan="2" /> <Rectangle Fill="Red" VariableSizedWrapGrid.ColumnSpan="2" VariableSizedWrapGrid.RowSpan="2" /> <Rectangle Fill="Green" VariableSizedWrapGrid.RowSpan="2" /> <Rectangle Fill="Blue" /> <Rectangle Fill="LightGray" /> </VariableSizedWrapGrid> Using the VariableSizedWrapGrid as ItemsPanel When you use a GridView however, you typically bind the ItemsSource property to a collection, for example in a viewmodel. In that case, you want to be able to switch the ColumnSpan and RowSpan depending on properties on the item. I tried to find a way to bind the VariableSizedWrapGrid.ColumnSpan attached property on the GridView’s ItemContainerStyle template to an observable property on the item, but it didn’t work. Instead, I decided to use a StyleSelector to switch the GridViewItem’s style. Here’s how: First I added my two GridViews to my XAML as follows: <Page.Resources> <local:MainViewModel x:Key="Main" /> <DataTemplate x:Key="DataTemplate1"> <Grid Background="{Binding Brush}"> <TextBlock Text="{Binding BrushCode}" /> </Grid> </DataTemplate> </Page.Resources> <Page.DataContext> <Binding Source="{StaticResource Main}" /> </Page.DataContext> <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}" Margin="20"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <GridView ItemsSource="{Binding Items}" ItemTemplate="{StaticResource DataTemplate1}" VerticalAlignment="Top"> <GridView.ItemsPanel> <ItemsPanelTemplate> <VariableSizedWrapGrid ItemHeight="150" ItemWidth="150" /> </ItemsPanelTemplate> </GridView.ItemsPanel> </GridView> <GridView Grid.Column="1" ItemsSource="{Binding Items}" ItemTemplate="{StaticResource DataTemplate1}" VerticalAlignment="Top"> <GridView.ItemsPanel> <ItemsPanelTemplate> <VariableSizedWrapGrid ItemHeight="100" ItemWidth="100" /> </ItemsPanelTemplate> </GridView.ItemsPanel> </GridView> </Grid> The MainViewModel looks like this: public class MainViewModel { public IList<Item> Items { get; private set; } public MainViewModel() { Items = new List<Item> { new Item { Brush = new SolidColorBrush(Colors.Red) }, new Item { Brush = new SolidColorBrush(Colors.Blue) }, new Item { Brush = new SolidColorBrush(Colors.Green), }, // And more... 
    }; } } As for the Item class, I am using an MVVM Light ObservableObject but you can use your own simple implementation of INotifyPropertyChanged of course: public class Item : ObservableObject { public const string ColSpanPropertyName = "ColSpan"; private int _colSpan = 1; public int ColSpan { get { return _colSpan; } set { Set(ColSpanPropertyName, ref _colSpan, value); } } public SolidColorBrush Brush { get; set; } public string BrushCode { get { return Brush.Color.ToString(); } } } Then I copied the GridViewItem’s style locally. To do this, I use Expression Blend’s functionality. It has the disadvantage of copying a large portion of XAML into your application, but the HUGE advantage of allowing you to change the look and feel of your GridViewItem everywhere in the application. For example, you can change the selection chrome, the item’s alignments and many other properties. Actually, every time I use a ListBox, ListView or any other data control, I typically copy the item style to a resource dictionary in my application and I tweak it. Note that Blend for Windows 8 apps is automatically installed with every edition of Visual Studio 2012 (including Express) so you have no excuses anymore not to use Blend :) Open MainPage.xaml in Expression Blend by right clicking on the MainPage.xaml file in the Solution Explorer and selecting Open in Blend from the context menu. Note that the items do not look very nice! The reason is that the default ItemContainerStyle sets the content’s alignment to “Center” which I never quite understood. Seems to me that you’d rather want the content to be stretched, but anyway it is easy to change. Right click on the GridView on the left and select Edit Additional Templates, Edit Generated Item Container (ItemContainerStyle), Edit a Copy. In the Create Style Resource dialog, enter the name “DefaultGridViewItemStyle”, select “Application” and press OK. Side note 1: You need to save in a global resource dictionary because later we will need to retrieve that Style from a global location. Side note 2: I would rather copy the style to an external resource dictionary that I link into the App.xaml file, but I want to keep things simple here. Blend switches into Template edit mode. The template you are editing now is inside the ItemContainerStyle and will govern the appearance of your items. This is where, for instance, the “checked” chrome is defined, and where you can alter it if you need to. Note that you can reuse this style for all your GridViews even if you use a different DataTemplate for your items. Makes sense? I probably need to think about writing another blog post dedicated to the ItemContainerStyle :) In the breadcrumb bar on top of the page, click on the style icon. The property we want to change now can be changed in the Style instead of the Template, which is a better idea. Blend is now in Style edit mode, as you can see in the Objects and Timeline pane. In the Properties pane, in the Search box, enter the word “content”. This will filter all the properties containing that partial string, including the two we are interested in: HorizontalContentAlignment and VerticalContentAlignment. Set these two values to “Stretch” instead of the default “Center”. Using the breadcrumb bar again, set the scope back to the Page (by clicking on the first crumb on the left). Notice how the items are now showing as squares in the first GridView. We will now use the same ItemContainerStyle for the second GridView. 
    To do this, right click on the second GridView and select Edit Additional Templates, Edit Generated Item Container (ItemContainerStyle), Apply Resource, DefaultGridViewItemStyle. The page now looks nicer: And now for the ColumnSpan! So now, let’s change the ColumnSpan property. First, let’s define a new Style that inherits the ItemContainerStyle we created before. Make sure that you save everything in Blend by pressing Ctrl-Shift-S. Open App.xaml in Visual Studio. Below the newly created DefaultGridViewItemStyle resource, add the following style: <Style x:Key="WideGridViewItemStyle" TargetType="GridViewItem" BasedOn="{StaticResource DefaultGridViewItemStyle}"> <Setter Property="VariableSizedWrapGrid.ColumnSpan" Value="2" /> </Style> Add a new class to the project, and name it MainItemStyleSelector. Implement the class as follows: public class MainItemStyleSelector : StyleSelector { protected override Style SelectStyleCore(object item, DependencyObject container) { var i = (Item)item; if (i.ColSpan == 2) { return Application.Current.Resources["WideGridViewItemStyle"] as Style; } return Application.Current.Resources["DefaultGridViewItemStyle"] as Style; } } In MainPage.xaml, add a resource to the Page.Resources section: <local:MainItemStyleSelector x:Key="MainItemStyleSelector" /> In MainPage.xaml, replace the ItemContainerStyle property on the first GridView with the ItemContainerStyleSelector property, pointing to the StaticResource we just defined. <GridView ItemsSource="{Binding Items}" ItemTemplate="{StaticResource DataTemplate1}" VerticalAlignment="Top" ItemContainerStyleSelector="{StaticResource MainItemStyleSelector}"> <GridView.ItemsPanel> <ItemsPanelTemplate> <VariableSizedWrapGrid ItemHeight="150" ItemWidth="150" /> </ItemsPanelTemplate> </GridView.ItemsPanel> </GridView> Do the same for the second GridView as well. Finally, in the MainViewModel, change the ColSpan property on the 3rd Item to 2. new Item { Brush = new SolidColorBrush(Colors.Green), ColSpan = 2 }, Running the application now creates the following image, which is what we wanted. Notice how the green item is now a “wide tile”. You can also experiment by creating different Styles, all inheriting the DefaultGridViewItemStyle and using different values of RowSpan for instance. This will allow you to create any layout you want, while leaving the heavy lifting of “flowing the layout” to the GridView control. What about changing these values dynamically? Of course as we can see in the Start menu, it would be nice to be able to change the ColumnSpan and maybe even the RowSpan values at runtime. Unfortunately at this time I have not found a good way to do that. I am investigating however and will make sure to post a follow up when I find what I am looking for!   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • CodePlex Daily Summary for Thursday, July 25, 2013

    CodePlex Daily Summary for Thursday, July 25, 2013Popular ReleasesEnglish Practice Helper: English Practice Helper Demo v1.0: The first demoTweetinvi a friendly C# Twitter API: Alpha 0.8.0.1: This is the first release of Tweetinvi. Please report any issue in the discussion or issues. Sincerely, LinviKartris E-commerce: Kartris v2.5003: This fixes an issue where search engines appear to identify as IE and so trigger the noIE page if there is not a non-responsive skin specified.VG-Ripper & PG-Ripper: VG-Ripper 2.9.45: changes NEW: Added Support for "ImgBabes.com" links NEW: Added Support for "ImagesIon.com" linksMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.4: Magelia WebStore version 2.4 introduces new and improved features: Basket and order calculation have been redesigned with a more modular approach geographic zone algorithms for tax and shipping calculations have been re-developed. The Store service has been split in three services (store, basket, order). Product start and end dates have been added. For variant products a unique code has been introduced for the top (variable) product, product attributes can now be defined at the top ...LogicCircuit: LogicCircuit 2.13.07.22: Logic Circuit - is educational software for designing and simulating logic circuits. Intuitive graphical user interface, allows you to create unrestricted circuit hierarchy with multi bit buses, debug circuits behavior with oscilloscope, and navigate running circuits hierarchy. Changes of this versionYou can make visual elements of the circuit been visible on its symbols. This way you can build composite displays, keyboards and reuse them. Please read about displays for more details http://ww...LINQ to Twitter: LINQ to Twitter v2.1.08: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.AcDown?????: AcDown????? v4.4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.4.3 ?? ??Bilibili????????????? ???????????? ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2.10?...Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6. These zip files are also available as a NuGet package: https://nuget.org/profiles/dlemstra/C# Intellisense for Notepad++: Initial release: Members auto-complete Integration with native Notepad++ Auto-Completion Auto "open bracket" for methods Right-arrow to accept suggestions51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet This release introduces the 51Degrees.mobi IIS Vary Header Fix. When Compression and Caching is used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4Handlers now have a ‘Count’ property. This is an integer value that shows how many devices in the dataset that use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...SMSGateWay: SMSGateWayLaunch: CMPP,SGIP,SMGP ????SP?? ??,????,????,????,VPS??,?????,???. ??????,????,?????,????......????? 
????SPID ,MsgSrc,???,???IP,?????? ?????,?????,??,???? ??QQ:3483305Centrify DirectControl PowerShell Module: 1.5.709: -fix- Computer password is correctly set when preparing Computer account for Self-Service join and hostname is longer than 14 charsDLog: DLog v1.0: Log v1.0SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719 2013/7/19 Pattern Changes: * DapperContext pattern is added. * All patterns are updated to work with one-to-one relations. Changes: * One-to-one relation is supported. * Minor bug fixes.PantheR's GraphX for .NET: GraphX for .NET v0.9.5 BETA: BETA 0.9.5 + Added GraphArea.SaveAsImage() method that supports different image formats + Added GraphArea.UseNativeObjectArrange property. True by default. If set to False it will use different coordinates handling that helps to soften vertex drag issues to the top and left area sides. + Added GraphArea.Translation property. It is needed to get correct translation coordinates when determining object position from the mouse coordinates. + Added new VertexControl.PositionChanged event along wit....NET Code Migrator for Dynamics CRM: v1.0.12: Combined the main macros, generated macros from a sample organization, and the CreateVisualStudioMacros utility into a single package.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1 Additional support for Windows 8.1 Preview New API (JS): addTextTrack New API (JS): msKeys New API (JS): msPlayToPreferredSourceUri New API (JS): msSetMediaKeys New API (JS): onmsneedkey New API (Xaml): SetMediaStreamSource method New API (Xaml): Stretch property New API (Xaml): StretchChanged event New API (Xaml): AreTransportControlsEnabled property New API (Xaml): IsFullWindow property New API (Xaml): PlayToPreferredSourceUri proper...CodeGen Code Generator: CodeGen 4.2.11: Changes in this release include: Added several new alternate forms of the <FIELD_SELWND> token to provide template developers better control over the case of field selection window names. Also added a new token <FIELD_SELWND_ORIGINAL> to preserve the case of selection window names in the same way that <FIELD_SELWND> used to. Enhanced UI Toolkit window script selection window processing (-ws) so that selection window names are no longer case sensitive (they aren't in UI Toolkit). Also the -w...Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: - Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. - Added fri...New ProjectsActive Directory Web App Availability Monitor for SCOM 2012: Extend the web application availability monitoring in System Center Operations Manager 2012 with the ability to monitor Active Directory Authenticated Web SitesArgoWinform: ArgoWinform ASP.NET Routing Optional Parameters in body of Urls: This library allows ASP.NET MVC developers to have custom parameters in the body of Urls. 
Routes that are defined by this library accepts a LookupService that wAutomate Variations in SharePoint 2013 using C#: Automating variations settings and configuration so whenever a deployment is needed, you will be ready with your PowerShell scripts.Automate Variations in SharePoint 2013 using PowerShell: Automating variations settings and configuration so whenever a deployment is needed, you will be ready with your PowerShell scripts.BingosoftSwimming: ?????????ContentCreator for Orchard: Orchard Module that is suposed to replace the Orchard.ContentPicker module by letting you create contentItems directlyGET - General Emitter Templates: GET: General Emitter Templates. Language: C# Given a data file (in xml format), it models the data according with the directives specified in the template file.GTS Project: This project is a private personal educational project to study and learn DNN and ASP.NET MVCJustCompare: Just Compare is human made tool to help humans to compare two digital files. We are now concentrating on source code files only. Media Tool: A collection of tool to manage media files (Pictures and videos)MVVMlight Navigation Service for Windows Phone 8: A simple project to make navigation in MVVMlight on windows phone faster.Qibla Compass for Windows Phone: Qibla compass is one of the mostly used community applications/utility among Muslims. It allows users to determine the correct location / direction of Ka’aba.QuickWebLauncherWP: Una aplicación para windows phone para leer fuentes de sindicación y coleccionar enlaces a los artículos que se desean leerSimpleBingWallpaper: It's a simple project for get bingwallpaper. if you like it. please +1.. ThanksStyrApp: Template Application for technology testingTataFm: Tata FM for Windows 8 app.TimeZoneInfoForm: Exercises timezone related classes and methods in .NET. It can be used as sample code or to troubleshoot suspected timezone issues in .NET.Wix Test: *WIX Test Solution* - is a simple WIX solution for learning some new features. This project is currently in setup mode and free.

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post. Invoking methods As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so: if ([restriction is true]) { [invoke cached method] } Take my example from my previous post: public class ClassA { public static void TestDynamic() { CallDynamic(new ClassA(), 10); CallDynamic(new ClassA(), "foo"); } public static void CallDynamic(dynamic d, object o) { d.Method(o); } public void Method(int i) {} public void Method(string s) {} } When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same: if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) { thisClassA.Method(i); } Caching callsite results So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved: The last method invoked on the callsite. All methods that have ever been invoked on the callsite. All methods that have ever been invoked on any callsite of the same type. We’ll cover each of these layers in order. Level 1 cache: the last method called on the callsite When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite, and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method. 
Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value): if ([restriction is true]) { [invoke cached method] return; } if (callSite._match) { _match = false; return; } else { UpdateAndExecute(callSite, arg0, arg1, ...); } Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast. But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches. Level 2 cache: all methods that have ever been invoked on the callsite When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation. An ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions. 
This allows the code within each UpdateAndExecute method to check for cache matches like so: foreach (T cachedDelegate in Rules) { callSite._match = true; cachedDelegate(); // sets _match to false if restrictions do not pass if (callSite._match) { // passed restrictions, and the cached method was invoked // set this delegate as the primary target to invoke next time callSite.Target = cachedDelegate; return; } // no luck, try the next one... } Level 3 cache: all methods that have ever been invoked on any callsite with the same signature The reason for this cache should be clear – if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed. Putting it all together So, how does this all fit together? Like so (I’ve omitted some implementation & performance details): That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code. Extensive use of expression trees, compiled to IL and then into native code. Multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked. And a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
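    To make the layering concrete, here is a small Java sketch of the same idea – a hand-written analogue with invented names (Rule, Binder, CallSite), not the actual DLR implementation, which compiles its restriction checks and bind results from expression trees into delegates. It shows the level 1 cache (the last rule that ran) and the level 2 cache (every rule the site has seen), with a fall-back to the binder on a miss:

      import java.util.ArrayList;
      import java.util.List;

      // A "rule" pairs a restriction check with the bound method to run when it passes.
      interface Rule {
          boolean matches(Object[] args);   // analogue of the compiled restriction
          Object invoke(Object[] args);     // analogue of the cached bind result
      }

      // Analogue of the language binder: produces a fresh rule when nothing cached applies.
      interface Binder {
          Rule bind(Object[] args);
      }

      final class CallSite {
          private final Binder binder;
          private Rule target;                                     // level 1: last rule used here
          private final List<Rule> rules = new ArrayList<Rule>();  // level 2: all rules this site has used

          CallSite(Binder binder) { this.binder = binder; }

          Object call(Object... args) {
              // Level 1: try the most recent rule first (in the DLR this check runs as compiled code).
              if (target != null && target.matches(args)) { return target.invoke(args); }
              // Level 2: scan every rule this call site has ever used.
              for (Rule rule : rules) {
                  if (rule.matches(args)) { target = rule; return rule.invoke(args); }
              }
              // Miss: ask the binder for a new rule, cache it, promote it, run it.
              Rule fresh = binder.bind(args);
              rules.add(fresh);
              target = fresh;
              return fresh.invoke(args);
          }
      }

    The DLR adds a third, global level keyed on the call site's delegate signature, and the _match trick described above exists only because the same compiled delegate doubles as both the invoke path and the cache-lookup probe.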

    Read the article

  • Possible to reverse order of ItemGroup elements?

    - by Filburt
    In MSBuild 3.5, is it possible to reverse the order of elements in an ItemGroup? Example I have 2 projects. One can be built independently; the other is dependent on the first. Each project references its specific items in a .targets file. project_A.targets <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup> <AssembliesToRemove Include="@(AssembliesToRemove)" /> <AssembliesToRemove Include="Assembly_A.dll"> <ApplicationName>App_A</ApplicationName> </AssembliesToRemove> </ItemGroup> <ItemGroup> <AssembliesToDeploy Include="@(AssembliesToDeploy)" /> <AssembliesToDeploy Include="Assembly_A.dll"> <AssemblyType>SomeType</AssemblyType> <ApplicationName>App_A</ApplicationName> </AssembliesToDeploy> </ItemGroup> </Project> project_B.targets <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup> <AssembliesToRemove Include="@(AssembliesToRemove)" /> <AssembliesToRemove Include="Assembly_B.dll"> <ApplicationName>App_B</ApplicationName> </AssembliesToRemove> </ItemGroup> <ItemGroup> <AssembliesToDeploy Include="@(AssembliesToDeploy)" /> <AssembliesToDeploy Include="Assembly_B.dll"> <AssemblyType>SomeType</AssemblyType> <ApplicationName>App_B</ApplicationName> </AssembliesToDeploy> </ItemGroup> </Project> project_A.proj <Project DefaultTargets="Start" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="project_A.targets" /> <Import Project="Common.targets" /> </Project> project_B.proj <Project DefaultTargets="Start" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="project_A.targets" /> <Import Project="project_B.targets" /> <Import Project="Common.targets" /> </Project> The Problem In this scenario the problem arises during the Task processing @(AssembliesToDeploy) because Assembly_B.dll needs to be deployed before Assembly_A.dll. What I tried to do I tried to influence the order of @(AssembliesToDeploy) by modifying project_B.targets like this: <ItemGroup> <AssembliesToDeploy Include="Assembly_B.dll"> <AssemblyType>SomeType</AssemblyType> <ApplicationName>App_B</ApplicationName> </AssembliesToDeploy> <AssembliesToDeploy Include="@(AssembliesToDeploy)" /> </ItemGroup> but when using project_B.targets inside project_B.proj the order inside @(AssembliesToDeploy) still remained Assembly_A.dll;Assembly_B.dll. Is there a solution which would allow me to reuse my .targets, i.e. not copying all ItemGroup elements to all .targets files?

    Read the article

  • CodePlex Daily Summary for Thursday, June 12, 2014

    CodePlex Daily Summary for Thursday, June 12, 2014Popular ReleasesLuna ORM - Data Layer Code Generator for Vb.Net and C#: Luna 4.14.6.7: Newer version, better LunaEngine with many new feature. Any class implements IDisposable.StackBuilder: StackBuilder 1.0.21.0: *Issue #9 : Take weight of case into account while searching optimal case... *Issue #10 : Changing direction of last layer to use remaining space while stacking a palette... *Issue #11 : New URL for AutoUpdater section in app.config... First working version of INTEX Corp Plugin...QuickMon: Version 3.15: This release expand on 'Presets' - now called Templates. A complete Template editor has been added so you can manage templates(edit, save, import and export). Existing agent configs can be saved as a template for reuse as well. Additionally a few other fixes were done as well. 1. Fixed Registry collector - allow 'blank' key value (<default> values) 2. Default Monitor pack history size is now 100 3. qmp file extension is now the defaultCommand Line Media Controller: v1.0.0.0: Initial ReleaseVidCoder: 1.5.23 Beta: Added first translations for 1.5. Includes new languages Portuguese, Japanese, Chinese Simplified and Czech. Many of these translations are still incomplete: you can help out on Crowdin. Added an option to pass through an input track if it matches the output codec. Updated HandBrake core to SVN 6209. Fixed crash on AAC passthrough. Fixed x264 settings getting set incorrectly when reverting from a preset with the Advanced tab. Fixed occasional crash when calculating remaining time. ...PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.4: Added New-Folder and Remove-Folder functions (Thanks to SueH) Added NoWait parameter to Execute-Process Added the ability for Deploy-Application.exe to point to a different .ps1 file by specifying it on the command-line Added checks to Deploy-Application.exe to verify the AppDeployToolkit folder exists Added PSAppDeployToolkit icon to Deploy-Application.exe Fixed issue where hang could occur if file version was null when using Get-FileVersion Improved exception handling and loggin...CS-Script Source: Release v3.8.1: Improved ConfigConsole AdvancedShell extensions support cs-script.7z - CS-Script Suite (binaries, documentation, samples) cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples) cs-scriptDocs.7z - CS-Script DocumentationProligence Orchard PowerShell: OrchardPs 0.2: Version 0.2, build 5273 Major Features: Support for Orchard 1.7 and 1.8 Support for managing Orchard themes Added new cmdlets: Get-OrchardFeature, Get-Tenant, Enable-Tenant, Disable-Tenant, Get-OrchardTheme, Enable-OrchardTheme Minor Features: Added 'gopc' alias for Get-OrchardPsCommand cmdlet Hidden 'Configuration' node in Orchard VFS, as it is not yet implemented Bug Fixes: Fixed signatures for *.ps1xml files 'FromAllTenants' switch no longer fails on tenants which are not running ...BugNET Issue Tracker: BugNET 1.6: Version 1.6 is a major upgrade to the latest frameworks and components by Microsoft. It includes a major UI overhaul using the bootstrap framework to have a modern, mobile friendly and easily customized layout. 
Upgrade to .NET 4.5 ASP.NET social auth Add script bundling and optimization Improvements for mobile devices Bootstrap (UI overhaul) Rewritten / friendly URL's Please read our release notes for BugNET 1.6: http://blog.bugnetproject.com/2014/06/08/bugnet-1-6-and-bugnet-pro-1...CppWindowsAPI: Library Build 20140608 [3.0]: There are many changes since the older versions before v3.0. Portable to the VS2013. Note that there is something still needed to implement so the relative components have been removed from this release, which will be re-implemented into the future releases. md5: 49bf91dbe4e47dee24d1f62a0b482334 sha-1: 4eabffc1b00557cdf8e83df18ac247e230f4c1d0 md5: 4a63a4923c23bcd821e7411453779086 sha-1: 1bb8549b92c8d791346cfe7efcdaeaa37098591d md5: 371026dfcac75e33f331430cf2c65415 sha-1: 7b8a155f30206f...SFDL.NET: SFDL.NET (2.2.9.3): Changelog: Retry Bugfix (Error Counter wurde nicht korrekt zurückgesetzt) Neue Einstellung: Retry Wartezeit ist nun EinstellbarLoad Runner - Distributed HTTP Pressure Test Tool: Load Runner 1.2: 1. added support for distributed load test (read documentation for details) 2. added detail documentationbabelua: 1.5.7.0: V1.5.7.0 - 2014.6.6Stability improvement: use "lua scripts folder" as lua search path when debugging;MyBB: MyBB 1.6.13 Türkçe Kurulum Paketi: MyBB 1.6.13 - Resmi Tam Türkçe UTF-8 Sifir Kurulum Paketi Rar Pass - Sifresi: mybb.com.tr Yayinci site: http://www.mybb.com.tr Orjinal indirme adresi: http://download.mybb.com.tr Destek 1: http://destek.mybb.com.tr/showthread.php?tid=12326 Destek 2: http://tr.mybbdepo.com/mybb-1-6-13-turkce-sifir-kurulum-paketi-konusu.html Dosya: MyBB1613KurulumPaketiTR.rar MD5: d2745b79aa8cc5f8cc09e8a91b50d3cc SHA-1: 6fbb8e611e6d78f8cd3d88d7a773edc67ecc2a3e Ekleyen: XpSerkan MCTR TEAM - Gururla Sunar.SEToolbox: SEToolbox 01.033.007 Release 1: Fixed breaking changes in Space Engineers in latest update. Installation of this version will replace older version.Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.10: Virto Commerce Community Edition version 1.10. To install the SDK package, please refer to SDK getting started documentation To configure source code package, please refer to Source code getting started documentation This release includes bug fixes and improvements (including Commerce Manager localization and https support). More details about this release can be found on our blog at http://blog.virtocommerce.com.Fantom - Expression Based Dynamic Language Manager: Fantom Binary and Test Web Site: The first version of FantomDLL. Download binary and sample site for testing.Back Up Your SharePoint: SPSBackUP 0.1: Supports SharePoint 2010 and 2013 All settings with xml input file Clean backup history Notifications by mail Log script results in rtf fileNPOI: NPOI 2.1: Assembly Version: 2.1.0 New Features a. XSSFSheet.CopySheet b. Excel2Html for XSSF c. insert picture in word 2007 d. Implement IfError function in formula engine Bug Fixes a. fix conditional formatting issue b. fix ctFont order issue c. fix vertical alignment issue in XSSF d. add IndexedColors to NPOI.SS.UserModel e. fix decimal point issue in non-English culture f. fix SetMargin issue in XSSF g.fix multiple images insert issue in XSSF h.fix rich text style missing issue in XSSF i. fix cell...Media Player (Technology Area): Media Player 0.3 (Codname Kalo): Themes are included, along with a picture viewer. 
    This does not require 0.2 to be installedNew ProjectsAddress Auto lookup For MS CRM 2013: Address Lookup to find correct addresses in less time.Aspose for Spring.Java: Aspose for Spring Java provides source code and detailed usage instructions for extending the existing Spring.Java samples.Beatting Mole XNA Game on Windows Phone: The project game using XNA Game Engine in Windows PhoneBrecham.Obex: The library provides very broad OBEX support, providing not just the ‘Put’ operation , but also: Connect, Put, Get, SetPath, Delete, and Abort.Command Line Media Controller: This application provides a command line interface for issuing basic media controls to most media player applications.DevExpress Prism Adapters: DevExpress Prism Adapters is a project intented to provide different adapters for DevExpress controls integration with Microsoft Prism.FVSB: fvsbH Logger (8 logger): Create logs with a html format with a Visual Studio 2013 style.Masked Input For CRM 2013: Mask Input for more user friendliness and to avoid user typo errors.MovieDatabase: A movie database manipulation program that uses the movie database api.OpenNetworkTester: GUI-based bandwidth testing toolProgress Tracker: A simple pet project that allows users to set up daily tasks and track which ones they complete.ProjectShipping: summarySerXio: tfs, read tfs dataValu Akunting: Sistem Informasi Akuntansi Valudata KomputindoWindows Phone 8.1 Training for MIC15: Using for store training source code and resource url/files/.

    Read the article

  • Recommended integration mechanism for bi-directional, authenticated, encrypted connection in C clien

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, once for each client: Client #1: 2x Client #2: 1 + y Client #3: z/4 This server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 int their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change the variable value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise the server can later inform the client that z = 28. As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution would be for both client and server to send the operation performed in their message, which would need to be executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20: server:z=20, client:z=20 server sends {+3} message (so z=23 locally) & client sends {-2} message (so z=18 locally) at the exact same time server receives {-2} message at some point, adds to his local copy so z=21 client receives {+3} message at some point, adds to his local copy so z=21 As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and server since we limited ourselves to commutative operations (addition of 3 and -2). This does mean that both client and server can be returning incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable. Some possible implementations of this idea include: Open an encrypted, always on TCP socket connection for communication Pros: no additional infrastructure needed, client and server know immediately if there is a problem (disconnect) with the other party, fairly straightforward (except the the encryption), native support from both JVM and C platforms Cons: pretty low-level so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic), probably have a lot of firewall headaches during client app installation Asynchronous messaging (ex: ActiveMQ) Pros: transactional, both C & Java integration, free up the client and server apps from needing retry logic or delivery verification, pretty straightforward encryption, easy extensibility via message filters/routers/etc Cons: need additional infrastructure (message server) which must never fail, Database or file system as asynchronous integration point Same pros/cons as above but messier RESTful Web Service Pros: simple, possible reuse of the server's existing REST API, SSL figures out the encryption problem for you (maybe use RSA key a la GitHub for authentication?) Cons: Client now needs to run a C HTTP REST server w/SSL, client and server need retry logic. Axis2 has both a Java and C version, but you may be limited to SOAP. What other techniques should I be evaluating? What real world experiences have you had with these mechanisms? Which do you recommend for this problem and why?
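    A tiny sketch of the commutative-delta idea described above (illustrative only; the class and method names are invented): each side applies its own operation locally and later applies the operation received from the other side, and because addition commutes, both copies end up at the same value regardless of message order.

      // Each party (client or server) keeps a local copy and applies deltas as they arrive.
      final class SharedValue {
          private int value;
          SharedValue(int seed) { this.value = seed; }
          void apply(int delta) { value += delta; }  // commutative, so ordering does not matter
          int get() { return value; }
      }

      public final class ConvergenceDemo {
          public static void main(String[] args) {
              SharedValue server = new SharedValue(20);  // server seeded with z = 20
              SharedValue client = new SharedValue(20);  // client's local copy

              server.apply(+3);   // server edits locally: z = 23
              client.apply(-2);   // client edits locally at the same time: z = 18

              // Each side eventually receives the other's message, in any order.
              server.apply(-2);
              client.apply(+3);

              System.out.println(server.get() + " == " + client.get());  // both are 21
          }
      }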

    Read the article

  • Perl MiniWebserver

    - by snikolov
    hey guys i have config this miniwebserver, however i require the server to download a file in the local directory i am getting a problem can you please fix my issue thanks !/usr/bin/perl use strict; use Socket; use IO::Socket; my $buffer; my $file; my $length; my $output; Simple web server in Perl Serves out .html files, echos form data sub parse_form { my $data = $_[0]; my %data; foreach (split /&/, $data) { my ($key, $val) = split /=/; $val =~ s/+/ /g; $val =~ s/%(..)/chr(hex($1))/eg; $data{$key} = $val;} return %data; } Setup and create socket my $port = shift; defined($port) or die "Usage: $0 portno\n"; my $DOCUMENT_ROOT = $ENV{'HOME'} . "public"; my $server = new IO::Socket::INET(Proto = 'tcp', LocalPort = $port, Listen = SOMAXCONN, Reuse = 1); $server or die "Unable to create server socket: $!" ; Await requests and handle them as they arrive while (my $client = $server-accept()) { $client-autoflush(1); my %request = (); my %data; { -------- Read Request --------------- local $/ = Socket::CRLF; while (<$client>) { chomp; # Main http request if (/\s*(\w+)\s*([^\s]+)\s*HTTP\/(\d.\d)/) { $request{METHOD} = uc $1; $request{URL} = $2; $request{HTTP_VERSION} = $3; } # Standard headers elsif (/:/) { (my $type, my $val) = split /:/, $_, 2; $type =~ s/^\s+//; foreach ($type, $val) { s/^\s+//; s/\s+$//; } $request{lc $type} = $val; } # POST data elsif (/^$/) { read($client, $request{CONTENT}, $request{'content-length'}) if defined $request{'content-length'}; last; } } } -------- SORT OUT METHOD --------------- if ($request{METHOD} eq 'GET') { if ($request{URL} =~ /(.*)\?(.*)/) { $request{URL} = $1; $request{CONTENT} = $2; %data = parse_form($request{CONTENT}); } else { %data = (); } $data{"_method"} = "GET"; } elsif ($request{METHOD} eq 'POST') { %data = parse_form($request{CONTENT}); $data{"_method"} = "POST"; } else { $data{"_method"} = "ERROR"; } ------- Serve file ---------------------- my $localfile = $DOCUMENT_ROOT.$request{URL}; Send Response if (open(FILE, "<$localfile")) { print $client "HTTP/1.0 200 OK", Socket::CRLF; print $client "Content-type: text/html", Socket::CRLF; print $client Socket::CRLF; my $buffer; while (read(FILE, $buffer, 4096)) { print $client $buffer; } $data{"_status"} = "200"; } else { $file = 'a.pl'; open(INFILE, $file); while (<INFILE>) { $output .= $_; ##output of the file } $length = length($output); print $client "'HTTP/1.0 200 OK", Socket::CRLF; print $client "Content-type: application/octet-stream", Socket::CRLF; print $client "Content-Length:".$length, Socket::CRLF; print $client "Accept-Ranges: bytes", Socket::CRLF; print $client 'Content-Disposition: attachment; filename="test.zip"', Socket::CRLF; print $client $output, Socket::CRLF; print $client 'Content-Transfer-Encoding: binary"', Socket::CRLF; print $client "Connection: Keep-Alive", Socket::CRLF; # #$data{"_status"} = "404"; # } close(FILE); Log Request print ($DOCUMENT_ROOT.$request{URL},"\n"); foreach (keys(%data)) { print (" $_ = $data{$_}\n"); } ----------- Close Connection and loop ------------------ close $client; } END

    Read the article

  • mySQL to .XSL help

    - by kielie
    hi guys, I have to create a script that takes a mySQL table, and exports it into .XSL format, and then saves that file into a specified folder on the web host. I got it working, but now I can't seem to get it to automatically save the file to the location without prompting the user. It needs to run every day at a specified time, so it can save the previous days leads into a .XSL file on the web host. Here is the code: <?php // DB TABLE Exporter // // How to use: // // Place this file in a safe place, edit the info just below here // browse to the file, enjoy! // CHANGE THIS STUFF FOR WHAT YOU NEED TO DO $dbhost = "-"; $dbuser = "-"; $dbpass = "-"; $dbname = "-"; $dbtable = "-"; // END CHANGING STUFF $cdate = date("Y-m-d"); // get current date // first thing that we are going to do is make some functions for writing out // and excel file. These functions do some hex writing and to be honest I got // them from some where else but hey it works so I am not going to question it // just reuse // This one makes the beginning of the xls file function xlsBOF() { echo pack("ssssss", 0x809, 0x8, 0x0, 0x10, 0x0, 0x0); return; } // This one makes the end of the xls file function xlsEOF() { echo pack("ss", 0x0A, 0x00); return; } // this will write text in the cell you specify function xlsWriteLabel($Row, $Col, $Value ) { $L = strlen($Value); echo pack("ssssss", 0x204, 8 + $L, $Row, $Col, 0x0, $L); echo $Value; return; } // make the connection an DB query $dbc = mysql_connect( $dbhost , $dbuser , $dbpass ) or die( mysql_error() ); mysql_select_db( $dbname ); $q = "SELECT * FROM ".$dbtable." WHERE date ='$cdate'"; $qr = mysql_query( $q ) or die( mysql_error() ); // Ok now we are going to send some headers so that this // thing that we are going make comes out of browser // as an xls file. // header("Pragma: public"); header("Expires: 0"); header("Cache-Control: must-revalidate, post-check=0, pre-check=0"); header("Content-Type: application/force-download"); header("Content-Type: application/octet-stream"); header("Content-Type: application/download"); //this line is important its makes the file name header("Content-Disposition: attachment;filename=export_".$dbtable.".xls "); header("Content-Transfer-Encoding: binary "); // start the file xlsBOF(); // these will be used for keeping things in order. $col = 0; $row = 0; // This tells us that we are on the first row $first = true; while( $qrow = mysql_fetch_assoc( $qr ) ) { // Ok we are on the first row // lets make some headers of sorts if( $first ) { foreach( $qrow as $k => $v ) { // take the key and make label // make it uppper case and replace _ with ' ' xlsWriteLabel( $row, $col, strtoupper( ereg_replace( "_" , " " , $k ) ) ); $col++; } // prepare for the first real data row $col = 0; $row++; $first = false; } // go through the data foreach( $qrow as $k => $v ) { // write it out xlsWriteLabel( $row, $col, $v ); $col++; } // reset col and goto next row $col = 0; $row++; } xlsEOF(); exit(); ?> I tried using, fwrite to accomplish this, but it didn't seem to go very well, I removed the header information too, but nothing worked. Here is the original code, as I found it, any help would be greatly appreciated. :-) Thanx in advance. :-)

    Read the article

  • Read files from directory to create a ZIP hadoop

    - by Félix
    I'm looking for Hadoop examples, something more complex than the word count example. What I want to do is read the files in a directory in Hadoop and get a zip, so I have thought to collect all the files in the map class and create the zip file in the reduce class. Can anyone give me a link to a tutorial or example that can help me build it? I don't want anyone to do this for me; I'm asking for a link with better examples than the word count. This is what I have, maybe it's useful for someone: public class Testing { private static class MapClass extends MapReduceBase implements Mapper<LongWritable, Text, Text, BytesWritable> { // reuse objects to save overhead of object creation Logger log = Logger.getLogger("log_file"); public void map(LongWritable key, Text value, OutputCollector<Text, BytesWritable> output, Reporter reporter) throws IOException { String line = ((Text) value).toString(); log.info("Doing something ... " + line); BytesWritable b = new BytesWritable(); b.set(value.toString().getBytes(), 0, value.toString().getBytes().length); output.collect(value, b); } } private static class ReduceClass extends MapReduceBase implements Reducer<Text, BytesWritable, Text, BytesWritable> { Logger log = Logger.getLogger("log_file"); ByteArrayOutputStream bout; ZipOutputStream out; @Override public void configure(JobConf job) { super.configure(job); log.setLevel(Level.INFO); bout = new ByteArrayOutputStream(); out = new ZipOutputStream(bout); } public void reduce(Text key, Iterator<BytesWritable> values, OutputCollector<Text, BytesWritable> output, Reporter reporter) throws IOException { while (values.hasNext()) { byte[] data = values.next().getBytes(); ZipEntry entry = new ZipEntry("entry"); out.putNextEntry(entry); out.write(data); out.closeEntry(); } BytesWritable b = new BytesWritable(); b.set(bout.toByteArray(), 0, bout.size()); output.collect(key, b); } @Override public void close() throws IOException { super.close(); out.close(); } } /** * Runs the demo. */ public static void main(String[] args) throws IOException { int mapTasks = 20; int reduceTasks = 1; JobConf conf = new JobConf(Testing.class); conf.setJobName("testing"); conf.setNumMapTasks(mapTasks); conf.setNumReduceTasks(reduceTasks); MultipleInputs.addInputPath(conf, new Path("/messages"), TextInputFormat.class, MapClass.class); conf.setOutputKeyClass(Text.class); conf.setOutputValueClass(BytesWritable.class); FileOutputFormat.setOutputPath(conf, new Path("/czip")); conf.setMapperClass(MapClass.class); conf.setCombinerClass(ReduceClass.class); conf.setReducerClass(ReduceClass.class); // Delete the output directory if it exists already JobClient.runJob(conf); } }
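    For the zip-assembly step on its own, stripped of the MapReduce plumbing, the core pattern is simply streaming each file into a ZipOutputStream. Below is a minimal sketch with plain java.util.zip and java.nio.file; it reads from a local directory (placeholder paths), whereas inside the job you would read the bytes from HDFS via the FileSystem API instead.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class DirToZip {
        public static void main(String[] args) throws IOException {
            Path sourceDir = Paths.get("input");   // directory to pack (placeholder)
            Path zipFile = Paths.get("out.zip");   // resulting archive (placeholder)

            try (OutputStream fos = Files.newOutputStream(zipFile);
                 ZipOutputStream zos = new ZipOutputStream(fos);
                 DirectoryStream<Path> files = Files.newDirectoryStream(sourceDir)) {

                for (Path file : files) {
                    if (!Files.isRegularFile(file)) continue;
                    // One zip entry per input file, named after the file itself.
                    zos.putNextEntry(new ZipEntry(file.getFileName().toString()));
                    Files.copy(file, zos);
                    zos.closeEntry();
                }
            }
        }
    }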

    Read the article

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
Make sure the items in bold below are in the "getActions" method of the project node: @Override public Action[] getActions(boolean arg0) { return new Action[]{ CommonProjectActions.newFileAction(), CommonProjectActions.copyProjectAction(), CommonProjectActions.moveProjectAction(), CommonProjectActions.renameProjectAction(), CommonProjectActions.deleteProjectAction(), CommonProjectActions.customizeProjectAction(), CommonProjectActions.closeProjectAction() }; } Run the Application. When you run the application, you should see this: Let's now try out the various actions: Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd.If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar more complex scenarios. Move. When you invoke the Move action, the dialog below is shown: Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed. Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable, in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method: private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { private final CustomerProject project; private CustomerProjectDeleteOperation(CustomerProject project) { this.project = project; } @Override public List<FileObject> getDataFiles() { List<FileObject> files = new ArrayList<FileObject>(); FileObject[] projectChildren = project.getProjectDirectory().getChildren(); for (FileObject fileObject : projectChildren) { addFile(project.getProjectDirectory(), fileObject.getNameExt(), files); } return files; } private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) { FileObject file = projectDirectory.getFileObject(fileName); if (file != null) { result.add(file); } } @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation: Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario. 
Useful implementations to look at: http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

    Read the article

  • java ioexception error=24 too many files open

    - by MattS
    I'm writing a genetic algorithm that needs to read/write lots of files. The fitness test for the GA is invoking a program called gradif, which takes a file as input and produces a file as output. Everything is working except when I make the population size and/or the total number of generations of the genetic algorithm too large. Then, after so many generations, I start getting this: java.io.FileNotFoundException: testfiles/GradifOut29 (Too many open files). (I get it repeatedly for many different files, the index 29 was just the one that came up first last time I ran it). It's strange because I'm not getting the error after the first or second generation, but after a significant amount of generations, which would suggest that each generation opens up more files that it doesn't close. But as far as I can tell I'm closing all of the files. The way the code is set up is the main() function is in the Population class, and the Population class contains an array of Individuals. Here's my code: Initial creation of input files (they're random access so that I could reuse the same file across multiple generations) files = new RandomAccessFile[popSize]; for(int i=0; i<popSize; i++){ files[i] = new RandomAccessFile("testfiles/GradifIn"+i, "rw"); } At the end of the entire program: for(int i=0; i<individuals.length; i++){ files[i].close(); } Inside the Individual's fitness test: FileInputStream fin = new FileInputStream("testfiles/GradifIn"+index); FileOutputStream fout = new FileOutputStream("testfiles/GradifOut"+index); Process process = Runtime.getRuntime().exec ("./gradif"); OutputStream stdin = process.getOutputStream(); InputStream stdout = process.getInputStream(); Then, later.... try{ fin.close(); fout.close(); stdin.close(); stdout.close(); process.getErrorStream().close(); }catch (IOException ioe){ ioe.printStackTrace(); } Then, afterwards, I append an 'END' to the files to make parsing them easier. FileWriter writer = new FileWriter("testfiles/GradifOut"+index, true); writer.write("END"); try{ writer.close(); }catch(IOException ioe){ ioe.printStackTrace(); } My redirection of stdin and stdout for gradif are from this answer. I tried using the try{close()}catch{} syntax to see if there was a problem with closing any of the files (there wasn't), and I got that from this answer. It should also be noted that the Individuals' fitness tests run concurrently. UPDATE: I've actually been able to narrow it down to the exec() call. In my most recent run, I first ran in to trouble at generation 733 (with a population size of 100). Why are the earlier generations fine? I don't understand why, if there's no leaking, the algorithm should be able to pass earlier generations but fail on later generations. And if there is leaking, then where is it coming from? UPDATE2: In trying to figure out what's going on here, I would like to be able to see (preferably in real-time) how many files the JVM has open at any given point. Is there an easy way to do that?
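    One way to keep exec() from leaking descriptors in a loop like this is to let the OS do the redirection instead of juggling the three streams by hand, and to watch the JVM's descriptor count as the run progresses (which also answers the "how many files are open right now" question, at least on Linux via /proc/self/fd). Below is a rough, hedged sketch of both ideas; the file names and loop bounds are placeholders, and the real fitness tests run concurrently, which this sketch does not attempt.

    import java.io.File;
    import java.io.IOException;

    public class GradifRunner {
        // Linux-only: count how many file descriptors this JVM currently holds.
        static int openDescriptors() {
            String[] fds = new File("/proc/self/fd").list();
            return fds == null ? -1 : fds.length;
        }

        static void runOnce(int index) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder("./gradif");
            // Let the OS wire the child directly to files; no Java-side streams to forget to close.
            pb.redirectInput(new File("testfiles/GradifIn" + index));
            pb.redirectOutput(new File("testfiles/GradifOut" + index));
            pb.redirectError(ProcessBuilder.Redirect.INHERIT);

            Process process = pb.start();
            int exit = process.waitFor();
            System.out.println("gradif " + index + " exited " + exit
                    + ", open fds now: " + openDescriptors());
        }

        public static void main(String[] args) throws Exception {
            for (int generation = 0; generation < 1000; generation++) {
                for (int i = 0; i < 100; i++) {
                    runOnce(i);
                }
            }
        }
    }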

    Read the article

  • Perl kill(0, $pid) in Windows always returning 1

    - by banshee_walk_sly
    I'm trying to make a Perl script that will run a set of other programs in Windows. I need to be able to capture the stdout, stderr, and exit code of the process, and I need to be able to see if a process exceeds its allotted execution time. Right now, the pertinent part of my code looks like: ... $pid = open3($wtr, $stdout, $stderr, $command); if($time < 0){ waitpid($pid, 0); $return = $? >> 8; $death_sig = $? & 127; $core_dump = $? & 128; } else{ # Do timeout stuff, currently not working as planned print "pid: $pid\n"; my $elapsed = 0; #THIS LOOP ONLY TERMINATES WHEN $time > $elapsed ...? while(kill 0, $pid and $time > $elapsed){ Time::HiRes::usleep(1000); # sleep for milliseconds $elapsed += 1; $return = $? >> 8; $death_sig = $? & 127; $core_dump = $? & 128; } if($elapsed >= $time){ $status = "FAIL"; print $log "TIME LIMIT EXCEEDED\n"; } } #these lines are needed to grab the stdout and stderr in arrays so # I may reuse them in multiple logs if(fileno $stdout){ @stdout = <$stdout>; } if(fileno $stderr){ @stderr = <$stderr>; } ... Everything is working correctly if $time = -1 (no timeout is needed), but the system thinks that kill 0, $pid is always 1. This makes my loop run for the entirety of the time allowed. Some extra details just for clarity: This is being run on Windows. I know my process does terminate because I get all the expected output. Perl version: This is perl, v5.10.1 built for MSWin32-x86-multi-thread (with 2 registered patches, see perl -V for more detail) Copyright 1987-2009, Larry Wall Binary build 1007 [291969] provided by ActiveState http://www.ActiveState.com Built Jan 26 2010 23:15:11 I appreciate your help :D For that future person who may have a similar issue, I got the code to work; here are the modified code sections: $pid = open3($wtr, $stdout, $stderr, $command); close($wtr); if($time < 0){ waitpid($pid, 0); } else{ print "pid: $pid\n"; my $elapsed = 0; while(waitpid($pid, WNOHANG) <= 0 and $time > $elapsed){ Time::HiRes::usleep(1000); # sleep for milliseconds $elapsed += 1; } if($elapsed >= $time){ $status = "FAIL"; print $log "TIME LIMIT EXCEEDED\n"; } } $return = $? >> 8; $death_sig = $? & 127; $core_dump = $? & 128; if(fileno $stdout){ @stdout = <$stdout>; } if(fileno $stderr){ @stderr = <$stderr>; } close($stdout); close($stderr);
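    The working fix above boils down to polling waitpid with WNOHANG instead of trusting kill 0 on Windows. For comparison only, the same "wait with a deadline, then give up" shape in Java is a single waitFor call with a timeout; this is an analogous sketch, not the poster's setup, and the command, redirect targets and limit are made up.

    import java.io.File;
    import java.util.concurrent.TimeUnit;

    public class TimedRun {
        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder("some_program.exe");   // placeholder command
            pb.redirectOutput(new File("stdout.txt"));
            pb.redirectError(new File("stderr.txt"));

            Process p = pb.start();

            // Block for at most 30 seconds, then decide.
            boolean finished = p.waitFor(30, TimeUnit.SECONDS);
            if (!finished) {
                p.destroyForcibly();          // analogous to flagging "TIME LIMIT EXCEEDED"
                p.waitFor();                  // reap it so exitValue() is safe to read
                System.out.println("FAIL: time limit exceeded");
            } else {
                System.out.println("exit code: " + p.exitValue());
            }
        }
    }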

    Read the article

  • How to debug KVO

    - by user8472
    In my program I use KVO manually to observe changes to values of object properties. I receive an EXC_BAD_ACCESS signal at the following line of code inside a custom setter: [self willChangeValueForKey:@"mykey"]; The weird thing is that this happens when a factory method calls the custom setter and there should not be any observers around. I do not know how to debug this situation. Update: The way to list all registered observers is observationInfo. It turned out that there was indeed an object listed that points to an invalid address. However, I have no idea at all how it got there. Update 2: Apparently, the same object and method callback can be registered several times for a given object - resulting in identical entries in the observed object's observationInfo. When removing the registration only one of these entries is removed. This behavior is a little counter-intuitive (and it certainly is a bug in my program to add multiple entries at all), but this does not explain how spurious observers can mysteriously show up in freshly allocated objects (unless there is some caching/reuse going on that I am unaware of). Modified question: How can I figure out WHERE and WHEN an object got registered as an observer? Update 3: Specific sample code. ContentObj is a class that has a dictionary as a property named mykey. It overrides: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"mykey"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } A couple of properties have getters and setters as follows: - (CGFloat)value { return [[[self mykey] objectForKey:@"value"] floatValue]; } - (void)setValue:(CGFloat)aValue { [self willChangeValueForKey:@"mykey"]; [[self mykey] setObject:[NSNumber numberWithFloat:aValue] forKey:@"value"]; [self didChangeValueForKey:@"mykey"]; } The container class has a property contents of class NSMutableArray which holds instances of class ContentObj. It has a couple of methods that manually handle registrations: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"contents"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } - (void)observeContent:(ContentObj *)cObj { [cObj addObserver:self forKeyPath:@"mykey" options:0 context:NULL]; } - (void)removeObserveContent:(ContentObj *)cObj { [cObj removeObserver:self forKeyPath:@"mykey"]; } - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (([keyPath isEqualToString:@"mykey"]) && ([object isKindOfClass:[ContentObj class]])) { [self willChangeValueForKey:@"contents"]; [self didChangeValueForKey:@"contents"]; } } There are several methods in the container class that modify contents. They look as follows: - (void)addContent:(ContentObj *)cObj { [self willChangeValueForKey:@"contents"]; [self observeDatum:cObj]; [[self contents] addObject:cObj]; [self didChangeValueForKey:@"contents"]; } And a couple of others that provide similar functionality to the array. They all work by adding/removing themselves as observers. Obviously, anything that results in multiple registrations is a bug and could sit somewhere hidden in these methods. My question targets strategies on how to debug this kind of situation. 
Alternatively, please feel free to provide an alternative strategy for implementing this kind of notification/observer pattern.
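    The "who registered this observer twice, and when?" problem is not unique to KVO, and it can be easier to reason about in a setting where the listener list is directly inspectable. As a loose analogue only (not Cocoa), here is a small Java sketch with java.beans.PropertyChangeSupport: registering the same listener twice would produce two notifications per change, and the listener array (the counterpart of observationInfo) is both the place to look while debugging and the place to guard before adding.

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;
    import java.util.Arrays;

    public class ObserverDemo {
        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
        private double value;

        public void addObserver(PropertyChangeListener l) {
            // Guard against double registration -- the bug described above.
            if (!Arrays.asList(pcs.getPropertyChangeListeners("mykey")).contains(l)) {
                pcs.addPropertyChangeListener("mykey", l);
            }
        }

        public void setValue(double v) {
            double old = this.value;
            this.value = v;
            pcs.firePropertyChange("mykey", old, v);   // like willChange/didChangeValueForKey
        }

        public static void main(String[] args) {
            ObserverDemo demo = new ObserverDemo();
            PropertyChangeListener listener =
                    evt -> System.out.println("observed change to " + evt.getPropertyName());

            demo.addObserver(listener);
            demo.addObserver(listener);            // silently ignored thanks to the guard
            System.out.println("registered observers: "
                    + demo.pcs.getPropertyChangeListeners("mykey").length);
            demo.setValue(42.0);                   // fires exactly once per registration
        }
    }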

    Read the article

  • Reusing my PagedList object on WCF

    - by AlexCode
    The problem: I have a custom collection PagedList<T> that is being returned from my WCF service as PagedListOfEntitySearchResultW_SH0Zpu5 when T is the EntitySearchResult object. I want to reuse this PagedList<T> type between the application and the service. My scenario: I've created a PagedList<T> type that inherits from List<T>. This type is in a separate assembly that is referenced by both the application and the WCF service. I'm using the /reference option of svcutil to enable type reuse. I also don't want any arrays returned, so I also use the /collectionType option to map to the generic List type. I'm using the following svcutil command to generate the service proxy: "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\svcutil.exe" /collectionType:System.Collections.Generic.List`1 /reference:..\..\bin\Debug\App.Utilities.dll http://localhost/App.MyService/MyService.svc?wsdl /namespace:*,"App.ServiceReferences.MyService" /out:..\ServiceProxy\MyService.cs The PagedList object is something like: [CollectionDataContract] public partial class PagedList<T> : List<T> { public PagedList() { } /// <summary> /// Creates a new instance of the PagedList object and doesn't apply any pagination algorithm. /// The only calculated property is the TotalPages, everything else needed must be passed to the object. /// </summary> /// <param name="source"></param> /// <param name="pageNumber"></param> /// <param name="pageSize"></param> /// <param name="totalRecords"></param> public PagedList(IEnumerable<T> source, int pageNumber, int pageSize, int totalRecords) { if (source == null) source = new List<T>(); this.AddRange(source); PagingInfo.PageNumber = pageNumber; PageSize = pageSize; TotalRecords = totalRecords; } public PagedList(IEnumerable<T> source, PagingInfo paging) { this.AddRange(source); this._pagingInfo = paging; } [DataMember] public int TotalRecords { get; set; } [DataMember] public int PageSize { get; set; } public int TotalPages() { if (this.TotalRecords > 0 && PageSize > 0) return (int)Math.Ceiling((double)TotalRecords / (double)PageSize); else return 0; } public bool? HasPreviousPage() { return (PagingInfo.PageNumber > 1); } public bool? HasNextPage() { return (PagingInfo.PageNumber < TotalPages()); } public bool? IsFirstPage() { return PagingInfo.PageNumber == 1; } public bool? IsLastPage() { return PagingInfo.PageNumber == TotalPages(); } PagingInfo _pagingInfo = null; [DataMember] public PagingInfo PagingInfo { get { if (_pagingInfo == null) _pagingInfo = new PagingInfo(); return _pagingInfo; } set { _pagingInfo = value; } } }

    Read the article

  • How can I share Perl data structures through a socket?

    - by pavun_cool
    In sockets I have written the client server program. First I tried to send the normal string among them it sends fine. After that I tried to send the hash and array values from client to server and server to client. When I print the values using Dumper, it gives me only the reference value. What should I do to get the actual values in client server? Server Program: use IO::Socket; use strict; use warnings; my %hash = ( "name" => "pavunkumar " , "age" => 20 ) ; my $new = \%hash ; #Turn on System variable for Buffering output $| = 1; # Creating a a new socket my $socket= IO::Socket::INET->new(LocalPort=>5000,Proto=>'tcp',Localhost => 'localhost','Listen' => 5 , 'Reuse' => 1 ); die "could not create $! \n" unless ( $socket ); print "\nUDPServer Waiting port 5000\n"; my $new_sock = $socket->accept(); my $host = $new_sock->peerhost(); while(<$new_sock>) { #my $line = <$new_sock>; print Dumper "$host $_"; print $new_sock $new . "\n"; } print "$host is closed \n" ; Client Program use IO::Socket; use Data::Dumper ; use warnings ; use strict ; my %hash = ( "file" =>"log.txt" , size => "1000kb") ; my $ref = \%hash ; # This client for connecting the specified below address and port # INET function will create the socket file and establish the connection with # server my $port = shift || 5000 ; my $host = shift || 'localhost'; my $recv_data ; my $send_data; my $socket = new IO::Socket::INET ( PeerAddr => $host , PeerPort => $port , Proto => 'tcp', ) or die "Couldn't connect to Server\n"; while (1) { my $line = <stdin> ; print $socket $ref."\n"; if ( $line = <$socket> ) { print Dumper $line ; } else { print "Server is closed \n"; last ; } } I have given my sample program about what I am doing. Can any one tell me what I am doing wrong in this code? And what I need to do for accessing the hash values?
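    The underlying answer, in any language, is that a reference cannot travel across a socket; the structure has to be serialized into bytes on one side and rebuilt on the other — in Perl that would typically mean Storable's freeze/thaw or a JSON module on both ends. As a language-neutral illustration of the same idea, here is a hedged Java sketch that ships a Map across a socket using built-in object serialization; the port and contents are arbitrary.

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.HashMap;
    import java.util.Map;

    public class MapOverSocket {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5000)) {
                // Client side, run in a thread so the example is self-contained.
                new Thread(() -> {
                    try (Socket s = new Socket("localhost", 5000);
                         ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream())) {
                        Map<String, String> payload = new HashMap<>();
                        payload.put("file", "log.txt");
                        payload.put("size", "1000kb");
                        out.writeObject(payload);       // serialize the structure, don't send a reference
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }).start();

                // Server side: read the bytes back into a real Map.
                try (Socket client = server.accept();
                     ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
                    @SuppressWarnings("unchecked")
                    Map<String, String> received = (Map<String, String>) in.readObject();
                    System.out.println("received: " + received);
                }
            }
        }
    }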

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, and include some fantastic enhancements to ASP.NET and the Entity Framework.  You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release.  Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials. One ASP.NET With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File-New Project with VS 2013 you’ll now see a single ASP.NET Project option: Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it.  For example, you could start with a Web Forms template and add Web API or Web Forms support for it, or create a MVC project and also enable Web Forms pages within it: This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span. Richer Authentication Support The new “One ASP.NET” project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers.  For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application: No Authentication Individual User Accounts (Single Sign-On support with FaceBook, Twitter, Google, and Microsoft ID – or Forms Auth with ASP.NET Membership) Organizational Accounts (Single Sign-On support with Windows Azure Active Directory ) Windows Authentication (Active Directory in an intranet application) The Windows Azure Active Directory support is particularly cool.  Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds).  It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories.  Simply choose the “Organizational Accounts” radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory to do this: This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it.  Now when you run the app your users can easily and securely sign-in using their Active Directory credentials within it – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013. Responsive Project Templates with Bootstrap The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tables and desktops. 
For example in a browser window the home page created by the MVC template looks like the following: When you resize the browser to a narrow window to see how it would like on a phone, you can notice how the contents gracefully wrap around and the horizontal top menu turns into an icon: When you click the menu-icon above it expands into a vertical menu – which enables a good navigation experience for small screen real-estate devices: We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there.  You can learn more about Bootstrap here. Visual Studio Web Tooling Improvements Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense Grouping, and other great improvements. For example, typing “ng-“ on an HTML element will show the intellisense for AngularJS: This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications: The screen shot below demonstrates how the HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available. In this case, the auto-completion list contains classes from Bootstrap’s CSS file. No more guessing at which Bootstrap element names you need to use: Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing support. The LESS editor comes with all the cool features from the CSS editor and has specific Intellisense for variables and mixins across all the LESS documents in the @import chain. Browser Link – SignalR channel between browser and Visual Studio The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, FireFox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers all at the same time.  This makes it much easier to easily develop/test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions.  By enabling developers to take advantage of the Browser Link API, it becomes possible to create very advanced scenarios that crosses boundaries between Visual Studio and any browser that’s connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser’s developer tools, remote controlling mobile emulators and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward. ASP.NET Scaffolding ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms. 
When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API Controller, the required NuGet packages and references to enable Web API are added to your project automatically.  To do this, just choose the Add->New Scaffold Item context menu: Support for scaffolding async controllers uses the new async features from Entity Framework 6. ASP.NET Identity ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports Claims-based authentication, where the user’s identity is represented as a set of claims from a trusted issuer. Users can login by creating an account on the website using username and password, or they can login using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.  ASP.NET Web API 2 ASP.NET Web API 2 has a bunch of great improvements including: Attribute routing ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers like this: OAuth 2.0 support The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices. OData Improvements ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Below are some of the specific enhancements in ASP.NET Web API 2 OData. Support for $select, $expand, $batch, and $value Improved extensibility Type-less support Reuse an existing model OWIN Integration ASP.NET Web API now fully supports OWIN and can be run on any OWIN capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API. More Web API Improvements In addition to the features above there have been a host of other features in ASP.NET Web API, including CORS support Authentication Filters Filter Overrides Improved Unit Testability Portable ASP.NET Web API Client To learn more go to http://www.asp.net/web-api/ ASP.NET SignalR 2 ASP.NET SignalR is library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. 
We’ve added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We’ve also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr. ASP.NET MVC 5 The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features including: Authentication filters: These filters allow you to specify authentication logic per-action, per-controller or globally for all controllers. Attribute Routing: Attribute Routing allows you to define your routes on actions or controllers. To learn more go to http://www.asp.net/mvc Entity Framework 6 Improvements Visual Studio 2013 ships with Entity Framework 6, which bring a lot of great new features to the data access space: Async and Task<T> Support EF6’s new Async Query and Save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios.  This allows you to free up threads that might otherwise by blocked on data access requests, and enable them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds the thread will be re-queued within your ASP.NET application and execution will continue.  This enables you to easily write significantly more scalable server code. Here is an example ASP.NET WebAPI action that makes use of the new EF6 async query methods: Interception and Logging Interception and SQL logging allows you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human readable log – which is great for debugging – as well as some lower level building blocks that give you access to the command and results. Here is an example of wiring up the simple log to Debug in the constructor of an MVC controller: Custom Code-First Conventions The new Custom Code-First Conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don’t match the Code First conventions. For example, the following convention configures all properties that are called ‘Key’ to be the primary key of the entity they belong to. This is different than the default Code First convention that expects Id or <type name>Id. Connection Resiliency The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. 
This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support: These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features. Microsoft OWIN Components Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET “Katana” project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana. Summary Today’s Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These feature span from server framework to data access to tooling to client-side HTML development.  They also integrate some great open-source technology and contributions from our developer community. Download and start using them today! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) Part 2 – Table per Type (TPT)In today’s blog post I am going to discuss Table per Concrete Type (TPC) which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy mainly based on what we've learned in this series. TPC and Entity Framework in the Past Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far and I've seen in some resources that it was even discouraged. The reason for that is just because Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches then configuring TPC requires manually writing XML in the EDMX file which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with fluent API just like other strategies and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches. Table per Concrete Type (TPC)In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: As you can see, the SQL schema is not aware of the inheritance; effectively, we’ve mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns. TPC Implementation in Code First Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on EntityMappingConfiguration class called MapInheritedProperties that exactly does this for us. 
Here is the complete object model as well as the fluent API to create a TPC mapping: public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } }          public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } }          public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } }      public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; }              protected override void OnModelCreating(ModelBuilder modelBuilder)     {         modelBuilder.Entity<BankAccount>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("BankAccounts");         });         modelBuilder.Entity<CreditCard>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("CreditCards");         });                 } } The Importance of EntityMappingConfiguration ClassAs a side note, it worth mentioning that EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is an snapshot of this class: namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping {     public class EntityMappingConfiguration<TEntityType> where TEntityType : class     {         public ValueConditionConfiguration Requires(string discriminator);         public void ToTable(string tableName);         public void MapInheritedProperties();     } } As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT and now we are using its MapInheritedProperties along with ToTable method to create our TPC mapping. TPC Configuration is Not Done Yet!We are not quite done with our TPC configuration and there is more into this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount();     CreditCard creditCard = new CreditCard() { CardType = 1 };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. The reason we got this exception is because DbContext.SaveChanges() internally invokes SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method on its turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges method merely iterates over all entries in ObjectStateManager and invokes AcceptChanges on each of them. 
Since the entities are in Added state, AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database and that's where the problem occurs since both the entities have been assigned the same value for their primary key by the database (i.e. on both BillingDetailId = 1) and the problem is that ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both BankAccounts and CreditCards table has been marked as identity. How to Solve The Identity Problem in TPC As you saw, using SQL Server’s int identity columns doesn't work very well together with TPC since there will be duplicate entity keys when inserting in subclasses tables with all having the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server’s int identity should be used. Some other RDBMSes have other mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds will solve the problem but yet another solution would be to completely switch off identity on the primary key property. As a result, we need to take the responsibility of providing unique keys when inserting records to the database. We will go with this solution since it works regardless of which database engine is used. Switching Off Identity in Code First We can switch off identity simply by placing DatabaseGenerated attribute on the primary key property and pass DatabaseGenerationOption.None to its constructor. DatabaseGenerated attribute is a new data annotation which has been added to System.ComponentModel.DataAnnotations namespace in CTP5: public abstract class BillingDetail {     [DatabaseGenerated(DatabaseGenerationOption.None)]     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } As always, we can achieve the same result by using fluent API, if you prefer that: modelBuilder.Entity<BillingDetail>()             .Property(p => p.BillingDetailId)             .HasDatabaseGenerationOption(DatabaseGenerationOption.None); Working With The Object Model Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount()      {          BillingDetailId = 1                          };     CreditCard creditCard = new CreditCard()      {          BillingDetailId = 2,         CardType = 1     };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Polymorphic Associations with TPC is Problematic The main problem with this approach is that it doesn’t support Polymorphic Associations very well. 
After all, in the database, associations are represented as foreign key relationships and in TPC, the subclasses are all mapped to different tables so a polymorphic association to their base class (abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the the domain model we introduced here where User has a polymorphic association with BillingDetail. This would be problematic in our TPC Schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column, which would have to refer both concrete subclass tables. This isn’t possible with regular foreign key constraints. Schema Evolution with TPC is Complex A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses. Generated SQLLet's examine SQL output for polymorphic queries in TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that being executed in the database: var query = from b in context.BillingDetails select b; Just like the SQL query generated by TPT mapping, the CASE statements that you see in the beginning of the query is merely to ensure columns that are irrelevant for a particular row have NULL values in the returning flattened table. (e.g. BankName for a row that represents a CreditCard type). TPC's SQL Queries are Union Based As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; (look at the lines highlighted in yellow.) EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined, project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There is also no Joins involved so it has a better performance than the SQL queries generated by TPT where a Join is required between the base and subclasses tables. Choosing Strategy GuidelinesBefore we get into this discussion, I want to emphasize that there is no one single "best strategy fits all scenarios" exists. As you saw, each of the approaches have their own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don’t require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn’t usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. 
Choosing Strategy Guidelines

Before we get into this discussion, I want to emphasize that there is no single "best strategy that fits all scenarios". As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

If you don't require polymorphic associations or queries, lean toward TPC; in other words, if you never or rarely query for BillingDetails and you have no class with an association to the BillingDetail base class. I recommend TPC only for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.

If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.

If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

Summary

In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, namely inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into mapping inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

References

ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • CodePlex Daily Summary for Thursday, November 18, 2010

    CodePlex Daily Summary for Thursday, November 18, 2010Popular ReleasesSitefinity Migration Tool: Sitefinity Migration Tool 0.2 Alpha: - Improvements for the Sitefinity RC releaseMiniTwitter: 1.57: MiniTwitter 1.57 ???? ?? ?????????????????? ?? User Streams ????????????????????? ???????????????·??????·???????VFPX: VFP2C32 2.0.0.7: fixed a bug in AAverage - NULL values in the array corrupted the result removed limitation in ASum, AMin, AMax, AAverage - the functions were limited to 65000 elements, now they're limited to 65000 rows ASplitStr now returns a 1 element array with an empty string when an empty string is passed (behaves more like ALINES) internal code cleanup and optimization: optimized FoxArray class - results in a speedup of 10-20% in many functions which return the result in an array - like AProcesses...Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample Databases for Microsoft SQL Server 2008R2 (SR1)This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. See Database Prerequisites for SQL Server 2008R2 for feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step by step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases them...VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.Razor Templating Engine: Razor Template Engine v1.1: Release 1.1 Changes: ADDED: Signed assemblies with strong name to allow assemblies to be referenced by other strongly-named assemblies. FIX: Filter out dynamic assemblies which causes failures in template compilation. FIX: Changed ASCII to UTF8 encoding to support UTF-8 encoded string templates. FIX: Corrected implementation of TemplateBase adding ITemplate interface.Prism Training Kit: Prism Training Kit - 1.1: This is an updated version of the Prism training Kit that targets Prism 4.0 and fixes the bugs reported in the version 1.0. This release consists of a Training Kit with Labs on the following topics Modularity Dependency Injection Bootstrapper UI Composition Communication Note: Take into account that this is a Beta version. If you find any bugs please report them in the Issue Tracker PrerequisitesVisual Studio 2010 Microsoft Word 2007/2010 Microsoft Silverlight 4 Microsoft S...Craig's Utility Library: Craig's Utility Library Code 2.0: This update contains a number of changes, added functionality, and bug fixes: Added transaction support to SQLHelper. Added linked/embedded resource ability to EmailSender. Updated List to take into account new functions. Added better support for MAC address in WMI classes. Fixed Parsing in Reflection class when dealing with sub classes. Fixed bug in SQLHelper when replacing the Command that is a select after doing a select. Fixed issue in SQL Server helper with regard to generati...MFCMAPI: November 2010 Release: Build: 6.0.0.1023 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system. 
Facebook BadgeDotNetNuke® Community Edition: 05.06.00: Major HighlightsAdded automatic portal alias creation for single portal installs Updated the file manager upload page to allow user to upload multiple files without returning to the file manager page. Fixed issue with Event Log Email Notifications. Fixed issue where Telerik HTML Editor was unable to upload files to secure or database folder. Fixed issue where registration page is not set correctly during an upgrade. Fixed issue where Sendmail stripped HTML and Links from emails...mVu Mobile Viewer: mVu Mobile Viewer 0.7.10.0: Tube8 fix.EPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.8.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the serverNew Features Improved chart support Different chart-types series on the same chart Support for secondary axis and a lot of new properties Better styling Encryption and Workbook protection Table support Import csv files Array formulas ...and a lot of bugfixesAutoLoL: AutoLoL v1.4.2: Added support for more clients (French and Russian) Settings are now stored sepperatly for each user on a computer Auto Login is much faster now Auto Login detects and handles caps lock state properly nowTailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.9: Contains a number of bug fixes and additional tutorial steps as well as complete database implementation details.ASP.NET MVC Project Awesome (rich jQuery AJAX helpers): 1.3 and demos: a library with mvc helpers and a demo project that demonstrates an awesome way of doing asp.net mvc. tested on mozilla, safari, chrome, opera, ie 9b/8/7/6 new stuff in 1.3 Autocomplete helper Autocomplete and AjaxDropdown can have parentId and be filled with data depending on the value of the parent PopupForm besides Content("ok") on success can also return Json(data) and use 'data' in a client side function Awesome demo improved (cruder, builder, added service layer)Nearforums - ASP.NET MVC forum engine: Nearforums v4.1: Version 4.1 of the ASP.NET MVC forum engine, with great improvements: TinyMCE added as visual editor for messages (removed CKEditor). Integrated AntiSamy for cleaner html user post and add more prevention to potential injections. Admin status page: a page for the site admin to check the current status of the configuration / db / etc. View Roadmap for more details.UltimateJB: UltimateJB 2.01 PL3 KakaRoto + PSNYes by EvilSperm: Voici une version attendu avec impatience pour beaucoup : - La Version PSNYes pour pouvoir jouer sur le PSN avec une PS3 Jailbreaker. - Pour l'instant le PSNYes n'est disponible qu'avec les PS3 en firmwares 3.41 !!! - La version PL3 KAKAROTO intégre ses dernières modification et prépare a l'intégration du Firmware 3.30 !!! Conclusion : - UltimateJB PSNYes => Valide l'utilisation du PSN : Uniquement compatible avec les 3.41 - ultimateJB DEFAULT => Pas de PSN mais disponible pour les PS3 sui...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.0: Fluent Ribbon Control Suite 2.0(supports .NET 4.0 RTM and .NET 3.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (only for .NET 4.0) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery NEW! 
*Walkthrough (documenta...patterns & practices: Prism: Prism 4 Documentation: This release contains the Prism 4 documentation in Help 1.0 (CHM) format and PDF format. The documentation is also included with the full download of the guidance. Note: If you cannot view the content of the CHM, using Windows Explorer, select the properties for the file and then click Unblock on the General tab. Note: The PDF version of the guidance is provided for printing and reading in book format. The online version of the Prism 4 documentation can be read here.Farseer Physics Engine: Farseer Physics Engine 3.1: DonationsIf you like this release and would like to keep Farseer Physics Engine running, please consider a small donation. What's new?We bring a lot of new features in Farseer Physics Engine 3.1. Just to name a few: New Box2D core Rope joint added More stable CCD algorithm YuPeng clipper Explosives logic New Constrained Delaunay Triangulation algorithm from the Poly2Tri project. New Flipcode triangulation algorithm. Silverlight 4 samples Silverlight 4 debug view XNA 4.0 relea...New Projectsbizicosoft crm: crmBlog Migrator: The Blog Migrator tool is an all purpose utility designed to help transition a blog from one platform to another. It leverages XML-RPC, BlogML, and WordPress WXR formats. It also provides the ability to "rewrite" your posts on your old blog to point to the new location.bzr-tfs integration tests: Used to test bzr-tfs integrationC++ Open Source Advanced Operating System: C++ Open Source Advanced Operating System is a project which allows starter developers create their own OS. For now it is at a really initial stage.Chavah - internet radio for Yeshua's disciples: Chavah (pronounced "ha-vah") is internet radio for Yeshua's disciples. Inspired by Pandora, Chavah is a Silverlight application that brings community-driven Messianic Jewish tunes for the Lord over the web to your eager ears.CodePoster: An add-in for Visual Studio which allows you to post code directly from Visual Studio to your blog. CRM 2011 Plugin Testing Tools: This solution is meant to make unit testing of plugins in CRM 2011 a simpler and more efficient process. This solution serializes the objects that the CRM server passes to a plugin on execution and then offers a library that allows you to deserialize them in a unit test.Edinamarry Free Tarot Software for Windows: A freeware yet an advanced Tarot reading divinity Software for Psychics and for all those who practice Divinity and Spirituality. This software includes Tarot Spread Designer, Tarot Deck Designer, Tarot Cards Gallery, Client & Customer Profile, Word Editor, Tarot Reader, etc.EPiSocial: Social addons for EPiServer.first team foundation project: this is my first project for the student to teach them about the ms visual studio 201o and team foundation serverFKTdev: Proyecto donde subiremos las pruebas, códigos de ejemplo y demás recursos en nuestro aprendizaje en XNA, hasta que comencemos un desarrollo estable.Gardens Point Component Pascal: Gardens Point Component Pascal is an implementation for .NET of the Component Pascal Language (CP). CP is an object oriented version of Pascal, and shares many design features with Oberon-2. Geoinformatics: geoinformaticsGREENHOUSEMANAGER: GREENHOUSE es un proyecto universitario para manejar los distintos aspectos de un invernadero. El sistema esta desarrollado en c# con interfaz grafica en WPFHousing: This project is only for the asp.net learning. HR-XML.NET: A .NET HR-XML Serialization Library. 
Also supports the Dutch SETU standard and some proprietary extensions used in the Netherlands. The project is currently targeting HR-XML version 2.5 and Setu standard 2008-01.InternetShop2: ShopLesson4: Lesson4 for M.Logical Synchronous Circuit Simulator: As part of a student project, we are trying to make a logic synchronous circuit simulator, with the ultimate goal of simulating a processor and a digital clock running on it.MediaOwl: MediaOwl is a music (albums, artists, tracks, tags) and movie (movies, series, actors, directors, genres) search engine, but above all, it is a Microsoft Silverlight 4 application (C#), that shows how to use Caliburn Micro.N2F Yverdon Solar Flare Reflector: The solar flare reflector provides minimal base-range protection for your N2F Yverdon installation against solar flare interference.Netduino Plus Home Automation Toolkit: The Netduino Plus Home Automation project is designed to proivde a communication platform from various consumer based home automation products that offer a common web service endpoint. This will hopefully create a low cost DIY alternative to the expensive ethernet interfaces.NRapid: NRapidOfficeHelper: Wrapper around the open xml office package. You can easily create xlsx documents based on a template xlsx document and reuse parts from that document, if you mark them as named ranges (i.e. "names").OffProjects: This is a private project which for my dev investigationParis Velib Stations for Windows Mobile: Allow to find the closest Velib bike station in Paris on a Windows Mobile Phone (6.5)/ Permet de trouver la station de Vélib la plus proche dans Paris ainsi que ses informations sur un smartphone Windows MobilePolarConverter: Adjust the measured distance of HRM files created by Polar Heart Rate monitorsSexy Select: a jQuery plugin that allows for easy manipulation of select options. Allows for adding, removing, sorting, validation and custom skinningSilverlight Progress Feedback: Demonstrates how to get progress feedback from slow running WPF processes in Silverlight.Silverlight Tabbed Panel: Tabbed Panel based on Silverlight targeted for both developers and designers audience. Tabbed Control is used in this project. This is a basic application. More features will be added in further releases. XAML has been used to design this panel. slabhid: SLABHIDDevice.dll is used for the SLAB MCU example code on PC, the original source code is written by C++. This wrapper class brings SLABHIDDevice.dll to the .Net world, so it will be possible to make some quick solution for firmware testing purpose.SuperWebSocket: A .NET server side implementation of WebSocket protocol.test1-jjoiner: just a test projectTotem Alpha Developer Framework For .Net: ????tadf??VS.NET???????????,????jtadf???????????????。 ?????????tadf??????????????J2EE???????VS.NET?????????,??tadf?????.NET??,???????????,????????????,??????C#??????????Java???????,??????。 tadf?????????????,????HTML???????????,???????,?????????,?????。tadf???????????,????????RICH UI?????WEB??。??????,??。 tadf?????????????????????,????WEB??????????。???????,???????????,?Ajax???????,????????????????,????????,????????????????。???????????,???????????????????????????????,?xml??????,?????????????xml...Ukázkové projekty: Obsahuje ukázkové projekty uživatele TenCoKaciStromy.WPFDemo: This Peoject is only for the WPF learning.Xinx TimeIt!: TinyAlarm is a small utility that allows you to configure an Alarm so that you can opt for 1. Shutdown computer 2. Play a sound 3. Show a note with sound 4. 
Disconnect a dial-up connection 5. Connect via dial-up connection

    Read the article

  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
    Over the next few weeks I will be posting blog entries outlining the SOA Suite integration of the Oracle Utilities Application Framework. These will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple and, while they will not illustrate ALL the possibilities, they will illustrate the relative ease of integration. Think of them as a foundation; you can obviously build upon them. Now, to ease a few customers' minds, this series will certainly feature the latest version of SOA Suite and the latest version of Oracle Utilities Application Framework, but the principles will apply to past versions of both products. So if you have Oracle SOA Suite 10g, or are a customer of Oracle Utilities Application Framework V2.1 or above, most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x. This first posting will not feature SOA Suite at all but will concentrate on the capability of the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantage of the maturing Web Services landscape in the market place, to the point where it is now easier to integrate with SOA infrastructure. There are a number of object types that can be exposed as Web Services:

    Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects, as they used to be the only method of generating Web Services. These are still supported for backward compatibility but are becoming less popular, as they are strict in their structure and are solely attribute based. To generate Maintenance Object based Web Service definitions you need to use the XAI Schema Editor component.

    Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site or industry specific objects that are based upon Maintenance Objects. They allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services.

    Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1, which allow application services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition.

    Service Scripts – As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/or objects to be combined into complex objects, or simply expose common routines as callable scripts. These can also be defined as Web Services.

    For the purpose of this series we will restrict ourselves to Business Objects. The techniques apply to any of the objects discussed above.
    Now, let's get to the important bit of this blog post: the creation of a Web Service. To build a Business Object, you first log on to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action at the top left of the screen (next to Home). A popup menu will appear with the menus available. If you do not see the Admin menu, then you do not have authority to use it. Here is an example:

    Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum):

    Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix it with CM to denote it as a customization (you can easily find it if you prefix it). As I am running this on my personal copy of the product, I will use my initials as the prefix and call the sample Web Service "AS-User".

    Description – A short description of the object to tell others what it is used for. For my example, I will use "Anthony Shorten's User Object".

    Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify "Anthony Shorten's Test Object for SOA Suite Integration".

    Maintenance Object – As this Business Object is going to be based upon a Maintenance Object, I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons: it is meaningful, simple, and available across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them).

    Parent Business Object – If I were not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using Parents, so I will leave this blank. You use either Parent Business Object or Maintenance Object, not both.

    Application Service – Business Objects, like other objects, are subject to security. You can attach an Application Service to an object to specify which groups of users (remember services are attached to user groups, not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS, as this is just a demonstration so I do not have to be too sophisticated about security.

    Instance Control – This controls whether the object can create instances. You can specify a Business Object purely to hold rules. I am keeping things simple here, so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records.

    The rest of the tab I will leave empty, as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this:

    Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save you some time here: when you create an object, the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema tab, you will see a BO Schema zone on the dashboard with a solitary button. This will allow the schema to be generated for you from our metadata.
    Click on the Generate button to generate a basic schema from the metadata. You will now see a schema with the element tags and references to the metadata of the Maintenance Object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging etc., but the online help has plenty of great examples to illustrate this. You can use the Schema Tips zone for more details of the available customizations.

    Note: The tags are generated from the language pack you have installed. The sample is English, so the tags are in English (which is the base language of all installations). If you are using a language pack, then the tags will be generated in the language of the user that generated the object.

    At this point you can save your Business Object by pressing the Save action. You now have a basic Business Object based on the USER maintenance object ready for use, but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select + next to XAI Inbound Service in the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following:

    Adapter – This will be set to Business Adaptor. This indicates that the service is either Business Object, Business Service or Service Script based.

    Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object.

    Schema Name – The name of the object. In this case it is the Business Object AS-User.

    Active – Set to Yes. This means the service is automatically available upon startup. You can enable and disable services as needed.

    Transaction Type – A default transaction type, as this is a Business Object based service. More about this in later postings. In our case we use the default, Read. This means that if we only specify data and not a transaction type, the product will assume you want to issue a read against the object.

    You need to fill in the following:

    XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object, as in this example, but this can match your site's interfacing standards. By the way, you can define multiple XAI Inbound Services/Web Services against the same object if you want.

    Description and Detailed Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration.

    You can now save the service definition. Note: There are lots of other options on this screen that allow the behavior of your service to be specified. I will leave them blank for now.

    When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI Servlet. WSDL URL is the standard WSDL URL used for integration. We will take advantage of that in later posts. An example of the definition is shown below:

    Now you have defined the service, but it will only be available after the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance, so the cache needs to be told about this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see the XML of the command sent to the server (the presence of the XML means it has finished).
    If you get an error around authorization, check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache, as the cache is shared (unless of course you are the only Web Service user on the system; in that case it only affects you). The Web Service is NOW available to be used.

    To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab. You need to type the request XML you want to test into the Main tab. The first tag is the XAI Inbound Service name, and the elements are as per your schema (minus the schema tag itself, as that is only used internally). My example is as follows (I want to return the details of user SYSUSER) – remember to close your tags. Hitting the Save button will issue the XML and return the response according to the Business Object schema. Now, before you panic: you may have noticed that it did not ask for credentials. This function propagates your online credentials to the service call.

    You now have a Web Service you can use for integration. We will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose. The whole process can take under a minute. Obviously I only showed the basics, but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts will build upon this. Hope you enjoyed the post.
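    To give a rough idea of what submitting such a request from code (rather than from the XAI Submission page) might look like, here is a small hypothetical C# sketch that posts an XAI-style XML payload over HTTP. The endpoint URL, the child element name under the AS-User root tag, and the credentials are illustrative assumptions only; the WSDL URL issued for your XAI Inbound Service defines the real contract for your environment.

    using System;
    using System.Net;

    class XaiSubmissionSketch
    {
        static void Main()
        {
            // Placeholder XAI servlet address - replace with your environment's URL.
            const string endpoint = "http://your-server:6500/XAIApp/xaiserver";

            // Root tag = XAI Inbound Service name; child elements follow the BO schema.
            // The element name "user" is an assumption for illustration only.
            const string request = "<AS-User><user>SYSUSER</user></AS-User>";

            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "text/xml";
                // Assumption: the XAI servlet is secured with basic authentication.
                client.Credentials = new NetworkCredential("SYSUSER", "password");

                string response = client.UploadString(endpoint, request);
                Console.WriteLine(response);
            }
        }
    }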

    Read the article

  • Using VLOOKUP in Excel

    - by Mark Virtue
    VLOOKUP is one of Excel’s most useful functions, and it’s also one of the least understood.  In this article, we demystify VLOOKUP by way of a real-life example.  We’ll create a usable Invoice Template for a fictitious company. So what is VLOOKUP?  Well, of course it’s an Excel function.  This article will assume that the reader already has a passing understanding of Excel functions, and can use basic functions such as SUM, AVERAGE, and TODAY.  In its most common usage, VLOOKUP is a database function, meaning that it works with database tables – or more simply, lists of things in an Excel worksheet.  What sort of things?   Well, any sort of thing.  You may have a worksheet that contains a list of employees, or products, or customers, or CDs in your CD collection, or stars in the night sky.  It doesn’t really matter. Here’s an example of a list, or database.  In this case it’s a list of products that our fictitious company sells: Usually lists like this have some sort of unique identifier for each item in the list.  In this case, the unique identifier is in the “Item Code” column.  Note:  For the VLOOKUP function to work with a database/list, that list must have a column containing the unique identifier (or “key”, or “ID”), and that column must be the first column in the table.  Our sample database above satisfies this criterion. The hardest part of using VLOOKUP is understanding exactly what it’s for.  So let’s see if we can get that clear first: VLOOKUP retrieves information from a database/list based on a supplied instance of the unique identifier. Put another way, if you put the VLOOKUP function into a cell and pass it one of the unique identifiers from your database, it will return you one of the pieces of information associated with that unique identifier.  In the example above, you would pass VLOOKUP an item code, and it would return to you either the corresponding item’s description, its price, or its availability (its “In stock” quantity).  Which of these pieces of information will it pass you back?  Well, you get to decide this when you’re creating the formula. If all you need is one piece of information from the database, it would be a lot of trouble to go to to construct a formula with a VLOOKUP function in it.  Typically you would use this sort of functionality in a reusable spreadsheet, such as a template.  Each time someone enters a valid item code, the system would retrieve all the necessary information about the corresponding item. Let’s create an example of this:  An Invoice Template that we can reuse over and over in our fictitious company. First we start Excel… …and we create ourselves a blank invoice: This is how it’s going to work:  The person using the invoice template will fill in a series of item codes in column “A”, and the system will retrieve each item’s description and price, which will be used to calculate the line total for each item (assuming we enter a valid quantity). For the purposes of keeping this example simple, we will locate the product database on a separate sheet in the same workbook: In reality, it’s more likely that the product database would be located in a separate workbook.  It makes little difference to the VLOOKUP function, which doesn’t really care if the database is located on the same sheet, a different sheet, or a completely different workbook. 
    In order to test the VLOOKUP formula we're about to write, we first enter a valid item code into cell A11: Next, we move the active cell to the cell in which we want the information retrieved from the database by VLOOKUP to be stored. Interestingly, this is the step that most people get wrong. To explain further: we are about to create a VLOOKUP formula that will retrieve the description that corresponds to the item code in cell A11. Where do we want this description put when we get it? In cell B11, of course. So that's where we write the VLOOKUP formula – in cell B11. Select cell B11:

    We need to locate the list of all available functions that Excel has to offer, so that we can choose VLOOKUP and get some assistance in completing the formula. This is found by first clicking the Formulas tab, and then clicking Insert Function: A box appears that allows us to select any of the functions available in Excel. To find the one we're looking for, we could type a search term like "lookup" (because the function we're interested in is a lookup function). The system would return us a list of all lookup-related functions in Excel. VLOOKUP is the second one in the list. Select it and click OK…

    The Function Arguments box appears, prompting us for all the arguments (or parameters) needed in order to complete the VLOOKUP function. You can think of this box as the function asking us the following questions: What unique identifier are you looking up in the database? Where is the database? Which piece of information from the database, associated with the unique identifier, do you wish to have retrieved for you? The first three arguments are shown in bold, indicating that they are mandatory arguments (the VLOOKUP function is incomplete without them and will not return a valid value). The fourth argument is not bold, meaning that it's optional: We will complete the arguments in order, top to bottom.

    The first argument we need to complete is the Lookup_value argument. The function needs us to tell it where to find the unique identifier (the item code in this case) that it should be returning the description of. We must select the item code we entered earlier (in A11). Click on the selector icon to the right of the first argument: Then click once on the cell containing the item code (A11), and press Enter: The value of "A11" is inserted into the first argument.

    Now we need to enter a value for the Table_array argument. In other words, we need to tell VLOOKUP where to find the database/list. Click on the selector icon next to the second argument: Now locate the database/list and select the entire list – not including the header line. The database is located on a separate worksheet, so we first click on that worksheet tab: Next we select the entire database, not including the header line: …and press Enter. The range of cells that represents the database (in this case "'Product Database'!A2:D7") is entered automatically for us into the second argument.

    Now we need to enter the third argument, Col_index_num. We use this argument to specify to VLOOKUP which piece of information from the database, associated with our item code in A11, we wish to have returned to us. In this particular example, we wish to have the item's description returned to us. If you look on the database worksheet, you'll notice that the "Description" column is the second column in the database.
    This means that we must enter a value of "2" into the Col_index_num box: It is important to note that we are not entering a "2" here because the "Description" column is in the B column on that worksheet. If the database happened to start in column K of the worksheet, we would still enter a "2" in this field.

    Finally, we need to decide whether to enter a value into the final VLOOKUP argument, Range_lookup. This argument requires either a true or false value, or it should be left blank. When using VLOOKUP with databases (as is true 90% of the time), the way to decide what to put in this argument can be thought of as follows: If the first column of the database (the column that contains the unique identifiers) is sorted alphabetically/numerically in ascending order, then it's possible to enter a value of true into this argument, or leave it blank. If the first column of the database is not sorted, or it's sorted in descending order, then you must enter a value of false into this argument. As the first column of our database is not sorted, we enter false into this argument:

    That's it! We've entered all the information required for VLOOKUP to return the value we need. Click the OK button and notice that the description corresponding to item code "R99245" has been correctly entered into cell B11: The formula that was created for us looks like this: If we enter a different item code into cell A11, we will begin to see the power of the VLOOKUP function: the description cell changes to match the new item code:

    We can perform a similar set of steps to get the item's price returned into cell E11. Note that the new formula must be created in cell E11. The result will look like this: …and the formula will look like this: Note that the only difference between the two formulae is that the third argument (Col_index_num) has changed from a "2" to a "3" (because we want data retrieved from the 3rd column in the database). If we decided to buy 2 of these items, we would enter a "2" into cell D11. We would then enter a simple formula into cell F11 to get the line total: =D11*E11 …which looks like this…

    Completing the Invoice Template

    We've learned a lot about VLOOKUP so far. In fact, we've learned all we're going to learn in this article. It's important to note that VLOOKUP can be used in other circumstances besides databases. This is less common, and may be covered in future How-To Geek articles. Our invoice template is not yet complete. In order to complete it, we would do the following:

    We would remove the sample item code from cell A11 and the "2" from cell D11. This will cause our newly created VLOOKUP formulae to display error messages: We can remedy this by judicious use of Excel's IF() and ISBLANK() functions. We change our formula from this…       =VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE) …to this…       =IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE))

    We would copy the formulas in cells B11, E11 and F11 down to the remainder of the item rows of the invoice. Note that if we do this, the resulting formulas will no longer correctly refer to the database table. We could fix this by changing the cell references for the database to absolute cell references. Alternatively – and even better – we could create a range name for the entire product database (such as "Products"), and use this range name instead of the cell references.
    The formula would change from this…       =IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE)) …to this…       =IF(ISBLANK(A11),"",VLOOKUP(A11,Products,2,FALSE)) …and then we would copy the formulas down to the rest of the invoice item rows.

    We would probably "lock" the cells that contain our formulae (or rather unlock the other cells), and then protect the worksheet, in order to ensure that our carefully constructed formulae are not accidentally overwritten when someone comes to fill in the invoice.

    We would save the file as a template, so that it could be reused by everyone in our company.

    If we were feeling really clever, we would create a database of all our customers in another worksheet, and then use the customer ID entered in cell F5 to automatically fill in the customer's name and address in cells B6, B7 and B8.

    If you would like to practice with VLOOKUP, or simply see our resulting Invoice Template, it can be downloaded from here.

    Read the article

  • SOA Community Newsletter: nouvelle lettre !

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTERAUGUST 2012 Dear SOA partner community member Have you submitted your feedback on SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you to complete the survey and help us to improve our SOA Community. Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by AMIS Technology Blog & Middleware by Link Consulting. Most of our courses have been overbooked, if you did not get a chance or missed it, we offer a wide range of online training and the course material. Key take-away from the advanced BPM course is to become an expert in ADF. Here is the course from Grant Ronald Learn Advanced ADF online available. The Link Consulting Team became experts in SOA Governance with EAMS and Oracle Enterprise Repository! We always encourage our community members to share their best practices and are very keen to publish it. Please let us know if you want to share your best practices through this medium.We encourage you to make use of the Specialization benefits - this month we are giving an opportunity to Promote Your SOA & BPM Events. Jürgen KressOracle SOA & BPM Partner Adoption EMEA NEW CONTENT Presentations & Training material OFM Summer CampsPromote Your SOA & BPM Events Advanced ADF Online, For Free By Grant BPM 11g Customer Stories & Solution Catalog & Process Accelerators Delivering SOA Governance with EAMS by Link Consulting Team WebLogic Server Provisioning and Patching News from our Partners & CommunityUpdated material by Oracle Connect and Network SOA Blogs SOA on Facebook SOA on LinkedIn SOA on Twitter Mix SOA Forum SOA Workspace PRESENTATIONS & TRAINING MATERIAL OFM SUMMER CAMPS Thanks to all attendees who invested their time and utilized the opportunity to attend the Summer Camps! Due to high demand of our most of the trainings, we had a long waiting list with more numbers of partners who are keen to attend it. We would like to give our special thanks to all trainers, who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspaceand WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here. You can register for the WebLogic Community here. To find out the first impressions of the event please visit our Facebook pages:www.facebook.com/WebLogicCommunity &www.facebook.com/soacommunity or Picasa AlbumThanks for the excellent blog posts from AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a twitter blog on@soacommunity & @wlscommunity. We will be pleased to publish it in our Newsletters. BPM Course Quotes “Its always easy, if you know, what you are doing” - Torsten Winterberg, Opitz“ The best ideas are the ideas from the best” - Filipe Sequeria, Primesoft “Best invest in the education in the last 12 months” - Richard Schaller, IPT “Practice best practice with the best instructor” - Graham Lamond Capgemini “If you have basic BPM knowledge, this is the course to really mater it” - Diogo Henriques Link Consulting “Very good trainers lot of work. 
Lot of fun as well” - Matthias Gris Workflow Factory “If you like to accelerate in Oracle come to the training to bring it all together” - Marcel van der Glind, Amis ADF Course Quotes "Excellent training, great opportunity to network!" - Frank Houweling, Amis "Lots of fun and good ideas" - Ana Santiago, GFI "Learn ADF, worth it Fusion Apps is the future" - Miguel Delgadillo, STO Consulting "The best way to learn Fusion Middleware from the #1" Alexandro Montantes, STO Consulting "Be advanced to to be the first” - Dimitar Petrov Fadata "Great opportunity to suck all the knowledge out of some very experienced product managers” - Wilfred von der Deijl, The Future Group WebLogic Course Quotes “Oracle trainings are the best” - Pedro Neto Novobas“ "Excellent training, well organized” - Pedro Antunh, Capgemini “This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps” - Leonardo Fernandes, Outsystems Additional Quotes “Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful Info, which will definitely help to increase quality of our future projects.” - Daniel Fasko fss-group.com I didn’t get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days.” - Jeroen Bakker Ordina “Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese that knows Lisbon very good, a nice choice of places to visit. Looking forward to come again next year.” Pedro Miguel Neto, Novobase PROMOTE YOUR SOA & BPM EVENTS The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to theOracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information. ADVANCED ADF ONLINE, FOR FREE BY GRANT The second part of the advanced ADF online eCourse is Live now! This covers the advanced topics of region and region interaction as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our Curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two. Back to top BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS Stories Everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before?, what worked?, what was achieved? If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem or in combination with a specific technology to talk to someone who has done it before. These stories are invaluable. Drop down the details of anything you think is relevant with a bit of detail and we will follow up on it. 
    As one good deed deserves another, we will do our best to give you stories if you need them to show that where you are going, others have trodden before. Send your stories to us using this e-mail link and we will share them among other like-minded people.

    Solution Catalogue This summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario in the same industry or in a similar environment, then we are keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Be in touch with us on this e-mail link and we will make sure to add your solution.

    Process Accelerators Finally, if you have specific processes that you are expert on, that you have implemented at a customer, and that you want to work with us on getting productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community Newsletter, but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly Director BPM [email protected]

    DELIVERING SOA GOVERNANCE WITH EAMS BY LINK CONSULTING TEAM In the last 12 years Link Consulting has been making its presence felt in specific areas such as Governance and Architecture, both in terms of practices and methodologies and in terms of products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience; it combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the best of both worlds into a solution that enables SOA Governance projects, initiatives and programs.

    Enterprise Architecture Management System Enterprise Architecture Management System (EAMS) is an automation based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create, customize and analyze repository data, architectural blueprints, reports and analytic charts.

    Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include: Asset Management, Asset Lifecycle Management, Usage Tracking, Service Discovery, Version Management, Dependency Analysis and Portfolio Management.

    EAMS - OER Edition The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs.
    EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

    The SOA Blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape. More blueprints can easily be created in order to accommodate the organization's needs in terms of detail, audience and metadata.

    Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

    Time Based Visualization All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future. The solution delivers Gap Analysis and a project oriented approach while taking into consideration the As-Was, As-Is and To-Be. Time based visualization differentiating factors: extensive automation and maintenance of architectural representations; an organization wide solution; easy access and navigation to and between all architectural artifacts and representations; a flexible meta-model with customization and extensibility capabilities; lifecycle management and enforcement of the time dimension over all the repository content; profile based customization; comprehensive visibility; architectural alignment; friendly and striking user interfaces. For more information on EAMS visit us here. For more information on SOA visit us here.

    WEBLOGIC SERVER PROVISIONING AND PATCHING For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page.

    Demo Highlights The demo showcases the following capabilities: Patching Oracle WebLogic Servers; Standardizing WebLogic Server Patch Rollouts; Creating a WebLogic Domain Provisioning Profile; Cloning a WebLogic Domain from a Provisioning Profile; Deploying a Java EE Application; Scaling Out an Oracle WebLogic Cluster.

    Demo Instructions Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Software Lifecycle Automation" section, click on the link "EM Cloud Control 12c WLS Provisioning and Patching" (tagged as "NEW"). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

  • CodePlex Daily Summary for Friday, June 10, 2011

    CodePlex Daily Summary for Friday, June 10, 2011Popular ReleasesZen4Sync, Orchestration and Test Load platform for SQL Server Merge Replication: Zen4Sync Report 1.0 - Excel Add-In: Zen4Sync Report is an Excel Add-In allowing you to generate reports based on the data generated by your Test Sessions. Choose a Zen4Sync Report release according to your version of Excel. The downloaded .zip archive contains all the resources needed to install the Zen4Sync Report Add-In for your version of Excel. For instrunctions on how to install, use and customize Zen4Sync Report, please see the Zen4Sync Report Guide.Candescent NUI: Candescent NUI Start Menu: This is the first version of the Candescent NUI Start Menu. There is currently only one item in the start menu by default (Windows Explorer). There is no user interface for configuration yet, but you can add programs yourself by adding lines to the file menu_config.csv. Please don't change the first line. Lines for programs have the following format: Name;Path Default Explorer;c:\windows\explorer.exe MyApp;c:\...\myapp.exe To show the menu, present your open hand to the kinect at a distan...Media Companion: MC 3.406b weekly: An minor issue was found with 3.406b, the fixed version is now posted.... Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide Important! If you find MC not displaying movie data properly, please try a 'movie rebuild' to reload the data from the nfo's into MC's cache. Fixes Movies Readded movie preference to rename invalid or scene nfo's to info extension Fix crash during ...SCCM Client Actions Tool: SCCM Client Actions Tool v0.5.1: SCCM Client Actions Tool v0.5.1 is currently the most stable version and includes all of the functionality requested so far. It comes with following changes since last version: Fixed an incorrect path to x64 client setup folder. It comes as a ZIP file that contains three files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA on Windows XP. Cmdkey.exe is natively a...Windows Azure VM Assistant: AzureVMAssist V1.0.0.5: AzureVMAssist V1.0.0.5 (Debug) - Test Release VersionSizeOnDisk: 1.8.0.3: Fix (issue 317): Main window icon loading error on windows server 2003 32bit runing on x86 cpu. Bypass of the Microsoft windows comexception.SketchFlow Template for Windows Phone: SketchFlow Template for Windows Phone 7: Initial release. Please make sure you are using a version of Blend with SketchFlow enabled and have also installed the the Mango developer tools for Windows Phone. fastJSON: v1.8: - Silverlight 4.0+ support merged - RegisterCustomType() for user defined serialization/deserialization without changing the source code open closed principalNetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9: Changes: - fix examples (include issue 16026) - add new examples - 32Bit/64Bit Walkthrough is now available in technical Documentation. Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. 
- Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...Reusable Library: V1.1.3: A collection of reusable abstractions for enterprise application developerClosedXML - The easy way to OpenXML: ClosedXML 0.54.0: New on this release: 1) Mayor performance improvements. 2) AdjustToContents now take into account the text rotation. 3) Fixed issues 6782, 6784, 6788HTML-IDEx: HTML-IDEx .15 ALPHA: This release fixes line counting a little bit and adds the masshighlight() sub, which highlights pasted and inserted code.AutoLoL: AutoLoL v2.0.3: - Improved summoner spells are now displayed - Fixed some of the startup errors people got - Double clicking an item selects it - Some usability changes that make using AutoLoL just a little easier - Bug fixes AutoLoL v2 is not an update, but an entirely new version! Please install to a different directory than AutoLoL v1Host Profiles: Host Profiles 1.0: Host Profiles 1.0 Release Quickly modify host file Automatically flush dnsVidCoder: 0.9.2: Updated to HandBrake 4024svn. This fixes problems with mpeg2 sources: corrupted previews, incorrect progress indicators and encodes that incorrectly report as failed. Fixed a problem that prevented target sizes above 2048 MB.SharePoint Search XSL Samples: SharePoint 2010 Samples: I have updated some of the samples from the 2007 release. These all work in SharePoint 2010. I removed the Pivot on File Extension because SharePoint 2010 search has refiners that perform the same function.AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta5: ??AcDown?????????????,??????????????,????、????。?????Acfun????? ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta5 ?????????? ???? ?? ???????? ???"????????"?? ????????????? ????????/???? ?? ???"????"??? ?? ??????????? ?? ?? ??????????? ?? ?????????????????? ??????????????????? ???????????????? ????????????Discussions???????? ????AcDown??????????????VFPX: GoFish 4 Beta 1: Current beta is Build 144 (released 2011-06-07 ) See the GoFish4 info page for details and video link: http://vfpx.codeplex.com/wikipage?title=GoFishSterling NoSQL OODB for .NET 4.0, Silverlight 4 and 5, and Windows Phone 7: Sterling OODB v1.5: Welcome to the Sterling 1.5 RTM. This version is backwards compatible without modification to the 1.4 beta. For the 1.0, you will need to upgrade your database. Please see this discussion for details. You must modify your 1.0 code for persistence. The 1.5 version defaults to an in-memory driver. To save to isolated storage or use one of the new mechanisms, see the available drivers and pass an instance of the appropriate one to your database (different databases may use different drivers). ...patterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: Many code changes: please see the readme.mht for details. New "Application Notifications" chapter. Updated "Server-Side Implementation" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To install and run the reference implementation, you must perform the fol...New ProjectsAngry Apps: A game platform written on top of XNA Game Studio. 
The purpose of this project is to project a vanilla type Game project which can be used in many types of games.BLooD_ICQ: bloodicqCloud Fox: Cloud Fox is a Windows Phone application that allows you to view your Firefox Sync data on your mobile phone. It is similar to Firefox Home for iPhone. The current version will target phones running NoDo, but future versions will eventually require Mango. The application is developed in C# using Silverlight, Json.Net, Mvvm Light and Ninject.Configuration files Merger: This program is to help to merge different config files(environmental difference) and common config file into a single web.config/app.config. CRM 2011 Plugin Utilities: This project contains utilities from CRM 2011 plugins. Genera/calculate full name of a custom entity given the first, last and middle name.CUDA driver API: Making the CUDA driver API as simple to use as the runtime API. Almost.DBXMLTransfer: Command line application that: 1) extracts xml returned by a stored procedure to a file and 2) passes xml contained in a file to a stored procedure (which can use it for inserts & updates for example)Dot Generator: This is a project I did to generate numbers using the number of the dots in the number it self to get a visual representation of how big larg numbers areDRYlib.Net: DRY (Don't Repeat Yourself) -- or, in other words, code-reuse -- makes me create this Class Library so I don't have to keep creating the same code here and there. Feel free to use these snippets. I'm releasing them under a permissive license (Apache Public License 2.0). The DRYlib is created using Visual BASIC 2010 Express. What you can find in this DRYlib include, but not limited to: * CRC Hash algorithms * Simple, high-performance Integer extensions * Simple, oft-used Stri...DW.Configurations: DW.Configurations - Enables an easy handling of application settings Please see www.my.libraries.de for more information and documentation.DW.Game.MauMau: Its a MauMau game with the possibility to define all rules DW.Game.Sudoku: It's just a Sudoku gameDW.Interactivity: DW.Interactivity - Brings additional functionalities to WPF controls Please see www.my.libraries.de for more information and documentation.DW.Logging: DW.Logging - Supports an easy working with log files Please see www.my.libraries.de for more information and documentation.DW.Services: DW.Services - Brings standard services Please see www.my.libraries.de for more information and documentation.DW.SharpTools: DW.SharpTools - Brings additional possibilities to C# Please see www.my.libraries.de for more information and documentation.DW.UnitTests: DW.UnitTests - Gives some objects for easy UnitTests Please see www.my.libraries.de for more information and documentation.DW.WPFDev: DW.WPFDev - Some useful objects for developing custom controls and behaviors Please see www.my.libraries.de for more information and documentation.DW.WPFToolkit: DW.WPFToolkit - A custom controls library Please see www.my.libraries.de for more information and documentation.Elucidate: A GUI to drive the SnapRAID command line (All supported platforms): Definition: explain in detail Synonyms: annotate, clarify, clear, clear up, decode, demonstrate, enlighten, exemplify, explicate, expound, get across, illuminate, illustrate, interpret, make perfectly clear This will take on the task of creating a SnapRAID GUI to drive it's command line options, but give a little help and clarification (And logging) to guide the novice user.FixWordProperties for Office 2003, 2007 and 2010: Ken Getz 
originally wrote the FixWordProperties I believe in 2006. I had a a few extra requirements like the ability to unlock locked files, without passwords, using the office interop model, instead of word and a few more things that I needed in the winter of 2007.Information System Alumni Community: Here is our Internet Programming project. This site help alumni to always keep interact with their friends through this site.They can track his friend name, city, occupation and then interact with them to (whoa..like Social Network right!!).Go check this out int main code samples: Repository for all demos/samples posted at http://blog.r2d2rigo.es/english/LarX - XNA Game Engine: LarX is an XNA Game Engine, 2D and 3D, that uses SunBurn for Rendering, sgMotion for animations, and BEPU for Physics. It enables developers to write quicly AAA games.Membership, Role and Profile Providers for develop, debug, test: These ASP.NET providers for Membership, Roles and Profiles are valuable during development, debugging and testing due to the ease of creating, removing and changing users, user roles, etc. Music search engine: Ilovethismusic.com lets you listen & download music based on your mood, weather, or any type of expression you can think of. Just press Play.NicolasLight: This is for business projectOlympic.Magazine: sport supplements e-shop projectProject Obscura: Project Obscura is a game about to be in development by a group of friends, more data soon...RegularExpressionTest: .SalesManagementSystem: SalesManagementSystemShoozla: Very simple and powerful tool to search for missing covers. It finds album covers automatically and periodically. It uses LASTFM web service (website registration needed). WPF Application developed in C# following a MVVM design pattern.Silverlight Mind Map demo: A Mind Map control library and sample application created in Silveright. Although the library can be useful by itself, the main goal of this project is to server as a demo and reference application for Wiki that shows different Silverlight testing techniques. SketchFlow Template for Windows Phone: The SketchFlow Template for Windows Phone adds a new SketchFlow template for Expression Blend users, making it fast and easy to prototype Windows Phone applications.Smart Blog: Smart Blog ????,??ASP.NET??,?????Entity Framework、MVC 3.0(razor)、WCF???。???????SqlCE?????,????????,????、??。 SQLiteManager (sys_27): SQLiteManager makes for manage SQLite Database. It's developed in C#. (WPF and use MVVM-patern)testing access to TFS: This is a just a trivial set of test files to learn about TFS. I am testing each step of this process and will try to document it for others. Maybe this is obvious to others, but I am still learning TFS.TFS NuGetter: NuGetter is a TFS 2010 Build Activity designed to provide packaging and deployment management to projects destined for a NuGet Gallery.

    Read the article

  • Flow-Design Cheat Sheet – Part I, Notation

    - by Ralf Westphal
    You want to avoid the pitfalls of object-oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and -Design (FOD, or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though "thinking in flows" is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at systems too big for the average programmer, some are concerned only with asynchronous processing, some are not very much concerned with programming at all. What I was looking for was a design method that helps in software projects of any size, be they large or tiny, involving synchronous or asynchronous processing, local or distributed, running on the web, on the desktop or on a smartphone. That's why I took ideas from all of the above sources plus some additional ones and came up with Event-Based Components, which later got repositioned and renamed to Flow-Design. In the meantime this has generated some discussion (in the German developer community) and several teams have started to work with Flow-Design. I've also conducted quite a few trainings using Flow-Orientation for design. The results are very promising: developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page tries to answer that demand by briefly introducing/describing every notational element as well as its translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation of the reasons behind Flow-Design elements. Details on why Flow-Design at all, and why in this specific way, can be found in the literature covering the topic. Here's a resource page on Flow-Design/Event-Based Components, if you're able to read German.

Notation

Connected Functional Units
The basic elements of any FOD are functional units (FU): think of FUs as some kind of software code block processing data. For the moment forget about classes, methods, "components", assemblies or whatever. See a FU as an abstract piece of code. Software then consists of just collaborating FUs. I'm using circles/ellipses to draw FUs, but if you like, use rectangles - whatever suits your whiteboard needs best.

The purpose of FUs is to process input and produce output. FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output). From where and where to is of no concern to a FU. This way FUs can be concatenated in arbitrary ways, and each FU can accept input from many sources and produce output for many sinks.

Flows
Connected FUs form a flow with a start and an end. Data enters a flow at a source and leaves it through a sink. Think of sources and sinks as special FUs which connect wires to the environment of a network of FUs.

Wiring Details
Data flows into/out of FUs through wires. This is to allude to electrical engineering, which has long been working with composable parts. Wires are attached to FUs using pins.
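Since the cheat sheet refers to a C# translation of these elements, here is a minimal, hedged sketch of how a single functional unit can be rendered in code, assuming the common Event-Based Components convention of an input pin as a method and an output pin as an event. The part name ToUpperCase and its logic are illustrative only, not taken from the article; the pin names follow the Process/Result defaults described in the next paragraph.

// Minimal sketch of a part (assumption: Event-Based Components style translation).
// Input pin = a method, output pin = an event; the part knows no collaborators.
using System;

class ToUpperCase                                  // an illustrative part
{
    public event Action<string> Result;            // single output pin (default name)

    public void Process(string input)              // single input pin (default name)
    {
        string output = input.ToUpperInvariant();  // the part's domain logic
        Result?.Invoke(output);                    // data flows out along the wire
    }
}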
Pins are the entry/exit points for the data flowing along the wires. Input/output pins currently need not be drawn explicitly. This is to keep designing on a whiteboard simple and quick.

Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and one output pin on a FU, though, you don't need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you're free to give even single pins different names. There is a shortcut in use to address a certain pin on a destination FU. The type of the wire is put in parentheses for two reasons: 1. this way a "no-type" wire can be easily denoted, 2. this is a natural way to describe tuples of data. To describe how much data is flowing, a star can be put next to the wire type.

Nesting – Boards and Parts
If more than 5 to 10 FUs need to be put in a flow, a FD starts to become hard to understand. To keep diagrams clutter-free they can be nested: you can turn any FU into a flow. This leads to Flow-Designs with different levels of abstraction. A in the illustration is a high-level functional unit; A.1 and A.2 are lower-level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make them easier to understand. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first-class citizens for programming. You can read the illustration like this: functional units A.1 and A.2 detail what A is supposed to do. The whole of A's responsibility is decomposed into the smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2.

Since A now does nothing anymore except contain A.1 and A.2, functional units are divided into two categories: boards and parts. Boards just contain other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards do not depend on services provided by nested functional units; they are just concerned with their interface in order to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic. They actually transform input into output. However, they do not depend on other functional units (a minimal board/part sketch follows below). Please note the usage of source and sink in boards: they correspond to the input pins and output pins of the board.

Implicit Dependencies
Nesting functional units leads to a dependency tree. Boards depend on nested functional units; they are the inner nodes of the tree. Parts are independent; they are the leaves. Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless: boards are so simple in their functionality that they are little affected by changes in the functional units they depend on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them. This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent.
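To make the board/part distinction and these implicit dependencies concrete, here is a hedged C# sketch of a board wiring up two parts, loosely mirroring the A/A.1/A.2 figure. All class names and the domain logic are illustrative assumptions, not taken from the article.

using System;

class A1                                        // a part: trims its input
{
    public event Action<string> Result;
    public void Process(string text) => Result?.Invoke(text.Trim());
}

class A2                                        // a part: upper-cases its input
{
    public event Action<string> Result;
    public void Process(string text) => Result?.Invoke(text.ToUpperInvariant());
}

class A                                         // a board: only wires up A1 and A2
{
    private readonly A1 a1 = new A1();
    private readonly A2 a2 = new A2();

    public event Action<string> Result;         // the board's sink/output pin

    public A()
    {
        a1.Result += a2.Process;                            // wire A.1 -> A.2
        a2.Result += output => Result?.Invoke(output);      // wire A.2 -> board output
    }

    public void Process(string input) => a1.Process(input); // the board's source/input pin
}

Note that the board touches only the parts' pins; it neither calls domain services on them nor inspects the data flowing between them.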
Since boards don't do anything with regard to a problem domain, they don't care much about data types. Their infrastructural purpose just needs the types of input/output pins to match.

Explicit Dependencies
You could say Flow-Orientation is about tackling complexity at its root cause: dependencies. "Natural" dependencies are depicted naturally, i.e. implicitly. And wherever possible, dependencies are not even created. Functional units don't know their collaborators within a flow. This is core to Flow-Orientation. It makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependent on nested functional units as a motor is on a spark plug or a crank shaft. With Flow-Design, software development moves closer to how hardware is constructed.

Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier – as counterintuitive as this might sound. So FD notation needs a way to denote explicit dependencies. Data flows along wires, but data does not flow along dependency relations. Instead, dependency relations represent service calls: functional unit C depends on/calls services on functional unit S. If you want to be more specific, name the services next to the dependency relation. Although you should try to steer clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU ("Customer repository" in the example) is orthogonal to the domain of the flow it is referenced by. If you like, emphasize this by using different shapes for dependent and independent FUs. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but also have side effects. A common pattern for using such explicit dependencies is to hook a GUI into a flow as the source and/or the sink of data. Treat FUs others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in a flow in the diagram they are depended upon.

Attributes of Functional Units
Creation and usage of functional units can be modified with attributes. So far the following have proven helpful:
Singleton: FUs are multitons by default. FUs in the same or different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instantiated anew wherever they appear in a design. Sometimes, though, it's helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with a "(S)".
Multiton: FUs on which others depend are singletons by default. This is because they usually are introduced where shared state comes into play. If you want to change them to be multitons, mark them with a "(M)".
Configurable: Some parts need to be configured before they can do their work in a flow. Annotate them with a "(C)" to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured. How such configuration happens is an implementation detail.
Entry point: In each design there needs to be a single part where "it all starts". That's the entry point for all processing. It's like Program.Main() in C# programs. Mark the entry point part with an "(E)".
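As a hedged illustration of such an entry point, here is what a Program.Main() might look like when it pushes data into the board A from the earlier sketch; the console wiring is an assumption for demonstration only, not part of the notation.

using System;

class Program
{
    static void Main()                          // the single place where "it all starts"
    {
        var board = new A();                    // instantiate the top-level board
        board.Result += Console.WriteLine;      // hook a sink into the flow (side effect)
        board.Process("  hello flow-design  "); // push the first data item into the flow
    }
}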
Quite often the entry point will be the GUI part. How the entry point is started is an implementation detail. Just consider it the first FU to start doing its job.

Patterns / Standard Parts
If more than a single wire is attached to an output pin, that's called a split (or fork). The same data flows on all of the wires. Remember: Flow-Designs are synchronous by default. So a split does not mean data is processed in parallel afterwards. Processing still happens synchronously, one branch after another. Do not assume any specific order of the processing on the different branches after the split. It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split. This map can be implicit or explicit.

Although FUs can have multiple input pins, it is preferable in most cases to combine input data from different branches using an explicit join. The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received; however, to produce its first output a join needs an input on all its input pins (see the sketch after this list). Other join behaviors can be:
reset all inputs after an output
only produce output if data arrives on certain input pins
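Here is a hedged C# sketch of the default join behavior described above: emit a tuple on every new input once both pins have received a value. The generic Join class is illustrative, not a type defined by the article, and the sketch is synchronous and not thread-safe.

using System;

class Join<T1, T2>                              // a standard join part with two input pins
{
    private T1 value1; private bool has1;
    private T2 value2; private bool has2;

    public event Action<Tuple<T1, T2>> Result;  // output pin: tuple of the input values

    public void In1(T1 value)                   // first input pin
    {
        value1 = value; has1 = true;
        TryEmit();
    }

    public void In2(T2 value)                   // second input pin
    {
        value2 = value; has2 = true;
        TryEmit();
    }

    private void TryEmit()
    {
        if (has1 && has2)                       // first output only after both pins have fired
            Result?.Invoke(Tuple.Create(value1, value2));
    }
}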

    Read the article
