Search Results

Search found 1608 results on 65 pages for 'declaration'.


  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are “upgraded” in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don’t have the time to rewrite? Then one option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section. This activity wraps the call to MSBuild.exe. Most of its properties are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties (Property – Meaning – Example):

    - CommandLineArguments – Use this to send in/override MSBuild properties in your project – “/p:MyProperty=SomeValue” or MSBuildArguments (this will let you define the arguments in the build definition or when queuing the build)
    - LogFile – Name of the log file where MSBuild will log the output – “MyBuild.log”
    - LogFileDropLocation – Location of the log file – BuildDetail.DropLocation + “\log”
    - Project – The project to execute – SourcesDirectory + “\BuildExtensions.targets”
    - ResponseFile – The name of the MSBuild response file – SourcesDirectory + “\BuildExtensions.rsp”
    - Targets – The target(s) to execute – New String() {“Target1”, “Target2”}
    - Verbosity – Logging verbosity – Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal

    Integrating with Team Build

    If your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
      <Target Name="MyTarget">
        <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                   BuildUri="$(BuildUri)"
                   Name="MyBuildStep"
                   Message="My build step executed"
                   Status="Succeeded"></BuildStep>
      </Target>
    </Project>

    When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message:

    The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.

    You can see that the path to the ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties.
    One solution here is to pass these properties in using the Command Line Arguments parameter. Here we pass in the parameters with the corresponding values from the current build, and the build log shows that the build step has in fact been inserted. The problem, as you probably spotted, is that the build step is inserted at the top of the build log instead of next to the MSBuild activity call. This is because we are using a legacy team build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that are using the UpgradeTemplate: custom build steps show up at the top of the build log.
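    The original post showed the Command Line Arguments value in a screenshot that is not reproduced here. As a rough sketch, the arguments string can take this shape (the server URL and build URI below are hypothetical placeholders; in practice both values come from the running build):

    /p:TeamFoundationServerUrl=http://tfsserver:8080/tfs/DefaultCollection /p:BuildUri=vstfs:///Build/Build/42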

    Read the article

  • SharpDX: best practice for multiple RenderForms?

    - by Rob Jellinghaus
    I have an XNA app, but I really need to add multiple render windows, which XNA doesn't do. I'm looking at SharpDX (both for multi-window support and for DX11 / Metro / many other reasons). I decided to hack up the SharpDX DX11 MiniCubeTexture sample to see if I could make it work. My changes are pretty trivial. The original sample had:

    [STAThread]
    private static void Main()
    {
        var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample");
        ...

    I changed this to:

    struct RenderFormWithActions
    {
        internal readonly RenderForm Form;
        // should just be Action but it's not in System namespace?!
        internal readonly Action<int> RenderAction;
        internal readonly Action<int> DisposeAction;

        internal RenderFormWithActions(RenderForm form, Action<int> renderAction, Action<int> disposeAction)
        {
            Form = form;
            RenderAction = renderAction;
            DisposeAction = disposeAction;
        }
    }

    [STAThread]
    private static void Main()
    {
        // hackity hack
        new Thread(new ThreadStart(() =>
        {
            RenderFormWithActions form1 = CreateRenderForm();
            RenderLoop.Run(form1.Form, () => form1.RenderAction(0));
            form1.DisposeAction(0);
        })).Start();

        new Thread(new ThreadStart(() =>
        {
            RenderFormWithActions form2 = CreateRenderForm();
            RenderLoop.Run(form2.Form, () => form2.RenderAction(0));
            form2.DisposeAction(0);
        })).Start();
    }

    private static RenderFormWithActions CreateRenderForm()
    {
        var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample");
        ...

    Basically, I split out all the Main() code into a separate method which creates a RenderForm and two delegates (a render delegate, and a dispose delegate), and bundles them all together into a struct. I call this method twice, each time from a separate, new thread. Then I just have one RenderLoop on each new thread. I was thinking this wouldn't work because of the [STAThread] declaration -- I thought I would need to create the RenderForm on the main (STA) thread, and run only a single RenderLoop on that thread. Fortunately, it seems I was wrong. This works quite well -- if you drag one of the forms around, it stops rendering while being dragged, but starts again when you drop it; and the other form keeps chugging away. My questions are pretty basic:

    1. Is this a reasonable approach, or is there some lurking threading issue that might make trouble?
    2. My code simply duplicates all the setup code -- it makes a duplicate SwapChain, Device, Texture2D, vertex buffer, everything. I don't have a problem with this level of duplication -- my app is not intensive enough to suffer resource issues -- but nonetheless, is there a better practice? Is there any good reference for which DirectX structures can safely be shared, and which can't?
    3. It appears that RenderLoop.Run calls the render delegate in a tight loop. Is there any standard way to limit the frame rate of RenderLoop.Run, if you don't want a 400FPS app eating 100% of your CPU? Should I just Thread.Sleep(30) in the render delegate?

    (I asked on the sharpdx.org forums as well, but Alexandre is on vacation for two weeks, and my sister wants me to do a performance with my app at her wedding in three and a half weeks, so I'm mighty incented here! http://robjsoftware.org for details of what I'm building....)

    Read the article

  • JavaOne Tutorial Report - JavaFX 2 – A Java Developer’s Guide

    - by Janice J. Heiss
    Oracle Java Technology Evangelist Stephen Chin and Independent Consultant Peter Pilgrim presented a tutorial session intended to help developers get a handle on JavaFX 2. Stephen Chin, a Java Champion, is co-author of Pro JavaFX Platform 2, while Java Champion Peter Pilgrim is an independent consultant who works out of London.

    NightHacking with Stephen Chin
    Before discussing the tutorial, a note about Chin’s “NightHacking Tour,” wherein from 10/29/12 to 11/11/12 he will be traveling across Europe via motorcycle, stopping at JUGs, interviewing Java developers, and offering live video streaming of the journey. As he says, “Along the way, I will visit user groups, interviewing interesting folks, and hack on open source projects. The last stop will be the Devoxx conference in Belgium.” It’s a dirty job but someone’s got to do it. His trip will take him from the UK through the Netherlands, Germany, Switzerland, Italy, France, and finally to Devoxx in Belgium. He has interviews lined up with Ben Evans, Trisha Gee, Stephen Colebourne, Martijn Verburg, Simon Ritter, Bert Ertman, Tony Epple, Adam Bien, Michael Hutterman, Sven Reimers, Andres Almiray, Gerrit Grunwald, Bertrand Boetzmann, Luc Duponcheel, Stephan Janssen, Cheryl Miller, and Andrew Phillips. If you expect to be in Chin’s vicinity at the end of October and in early November, by all means get in touch with him at his site and add your perspective. The more the merrier!

    Taking the JavaFX Plunge
    Now to the business at hand. The “JavaFX 2 – A Java Developer’s Guide” tutorial introduced Java developers to the JavaFX 2 platform from the perspective of seasoned Java developers. It demonstrated the breadth of the JavaFX APIs through examples that are built out in the course of the session in an effort to present the basic requirements in using JavaFX to build rich internet applications. Chin began with a quote from Oracle’s Christopher Oliver, the creator of F3, the original version of JavaFX, on the importance of GUIs: “At the end of the day, on the one hand we have computer systems, and on the other, people. Connecting them together, and allowing people to interact with computer systems in a compelling way, requires graphical user interfaces.”

    Chin explained that JavaFX is about producing an immersive application experience that involves cross-platform animation, video and charting. It can integrate Java, JavaScript and HTML in the same application. The new graphics stack takes advantage of hardware acceleration for 2D and 3D applications. In addition, developers can integrate Swing applications using JFXPanel. He reminded attendees that they were building JavaFX apps using pure Java APIs that include builders for declarative construction; developers can also call upon alternative languages such as GroovyFX, ScalaFX and Visage if they want simpler UI creation. He presented the fundamentals of JavaFX 2.0 (properties, lists and binding) and then explored primitive, object and FX list collection properties, providing an example of property declaration in code. Pilgrim and Chin explained the architectural structure of JavaFX 2 and its basic properties:

    * Primitive Properties
    * Object Properties
    * FX List Collection Properties

    Properties are:

    – Observable
    – Lazy
    – Type Safe

    Chin and Pilgrim then took attendees through several participatory demos and got deep into the weeds of the code for the two-hour session. At the end, everyone knew a lot more about the inner workings of JavaFX 2.0.

    Read the article

  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the seventeenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers two new language features being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2.

    Optional Parameters in C# 4.0
    C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for a while). Parameters are optional when a default value is specified as part of a declaration. For example, the method below takes two parameters – a “category” string parameter, and a “pageIndex” integer parameter. The “pageIndex” parameter has a default value of 0, and as such is an optional parameter (a reconstructed sketch of these examples appears at the end of this post). When calling the method we can explicitly pass two parameters to it, or we can omit passing the second optional parameter – in which case the default value of 0 will be passed. Note that VS 2010’s Intellisense indicates when a parameter is optional, as well as what its default value is, when statement completion is displayed.

    Named Arguments and Optional Parameters in C# 4.0
    C# 4.0 also now supports the concept of “named arguments”. This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position. For example, I could explicitly identify the second argument passed to the GetProductsByCategory method by name (making its usage a little more explicit). Named arguments come in very useful when a method supports multiple optional parameters, and you want to specify which arguments you are passing. For example, below we have a method DoSomething that takes two optional parameters; we could use named arguments to call it in any of several ways. Because both parameters are optional, in cases where only one (or zero) parameters is specified, the default value for any non-specified arguments is passed.

    ASP.NET MVC 2 and Optional Parameters
    One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2’s input binding support for Action methods on Controller classes. For example, consider a scenario where we want to map URLs like “Products/Browse/Beverages” or “Products/Browse/Desserts” to a controller action method. We could do this by writing a URL routing rule that maps the URLs to a method. We could then optionally use a “page” querystring value to indicate whether or not the results displayed by the Browse method should be paged – and if so, which page of the results should be displayed. For example: /Products/Browse/Beverages?page=2. With ASP.NET MVC 1 you would typically handle this scenario by adding a “page” parameter to the action method and making it a nullable int (which means it will be null if the “page” querystring value is not present). You could then write code to convert the nullable int to an int – and assign it a default value if it was not present in the querystring. With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly. Simply declare the action method parameter as an optional parameter with a default value (the post shows both the C# and VB versions). If the “page” value is present in the querystring (e.g. /Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer. If the “page” value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method. This makes the code a little more concise and readable.

    Summary
    There are a bunch of great new language features coming to both C# and VB with VS 2010. The above two features (optional parameters and named arguments) are but two of them. I’ll blog about more in the weeks and months ahead. If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), as well as providing a nice summary of the core .NET class libraries, you might also want to check out the newly released C# 4.0 in a Nutshell book from O’Reilly. It does a very nice job of packing a lot of content into an easy-to-search format with plenty of samples.

    Hope this helps,

    Scott
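    The code for the examples above appeared as screenshots in the original post and did not survive extraction. The sketch below is a plausible reconstruction, not Scott's original code: the names GetProductsByCategory, DoSomething, and Browse come from the text, while the parameter names on DoSomething, the method bodies, and the controller scaffolding are invented for illustration.

    using System;
    using System.Collections.Generic;
    using System.Web.Mvc; // assumes an ASP.NET MVC 2 project reference

    public class ProductsController : Controller
    {
        // Optional parameter: pageIndex defaults to 0 when the caller omits it.
        public List<string> GetProductsByCategory(string category, int pageIndex = 0)
        {
            return new List<string>(); // placeholder body
        }

        // Two optional parameters; callers can use named arguments to set either one.
        public void DoSomething(string name = "default", int size = 0)
        {
        }

        // MVC 2 action: "page" binds from the querystring and defaults to 0 when absent.
        public ActionResult Browse(string category, int page = 0)
        {
            return View();
        }

        public void Demo()
        {
            GetProductsByCategory("Beverages", 2);            // both arguments passed
            GetProductsByCategory("Beverages");               // pageIndex defaults to 0
            GetProductsByCategory("Beverages", pageIndex: 2); // named argument
            DoSomething(size: 10);                            // name keeps its default
        }
    }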

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we start getting into a more code-heavy portion of this series. Thankfully, this means the groundwork is mostly set, and after adding the modules we will have a complete application that can be provided with full source. The Jobs module will have two concerns: adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this:

    And now... on to the code.

    XAML for the Job Posting View

    All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both show off records and perform operations on the records without much hassle. That XAML is going to look something like this:

    01.<Grid x:Name="LayoutRoot"
    02.Background="White">
    03.<Grid.RowDefinitions>
    04.<RowDefinition Height="30" />
    05.<RowDefinition />
    06.</Grid.RowDefinitions>
    07.<StackPanel Orientation="Horizontal">
    08.<Button x:Name="xAddRecordButton"
    09.Content="Add Job"
    10.Width="120"
    11.cal:Click.Command="{Binding AddRecord}"
    12.telerik:StyleManager.Theme="Windows7" />
    13.<Button x:Name="xEditRecordButton"
    14.Content="Edit Job"
    15.Width="120"
    16.cal:Click.Command="{Binding EditRecord}"
    17.telerik:StyleManager.Theme="Windows7" />
    18.</StackPanel>
    19.<telerikGrid:RadGridView x:Name="xJobsGrid"
    20.Grid.Row="1"
    21.IsReadOnly="True"
    22.AutoGenerateColumns="False"
    23.ColumnWidth="*"
    24.RowDetailsVisibilityMode="VisibleWhenSelected"
    25.ItemsSource="{Binding MyJobs}"
    26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}"
    27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}">
    28.<telerikGrid:RadGridView.Columns>
    29.<telerikGrid:GridViewDataColumn Header="Job Title"
    30.DataMemberBinding="{Binding JobTitle}"
    31.UniqueName="JobTitle" />
    32.<telerikGrid:GridViewDataColumn Header="Location"
    33.DataMemberBinding="{Binding Location}"
    34.UniqueName="Location" />
    35.<telerikGrid:GridViewDataColumn Header="Resume Required"
    36.DataMemberBinding="{Binding NeedsResume}"
    37.UniqueName="NeedsResume" />
    38.<telerikGrid:GridViewDataColumn Header="CV Required"
    39.DataMemberBinding="{Binding NeedsCV}"
    40.UniqueName="NeedsCV" />
    41.<telerikGrid:GridViewDataColumn Header="Overview Required"
    42.DataMemberBinding="{Binding NeedsOverview}"
    43.UniqueName="NeedsOverview" />
    44.<telerikGrid:GridViewDataColumn Header="Active"
    45.DataMemberBinding="{Binding IsActive}"
    46.UniqueName="IsActive" />
    47.</telerikGrid:RadGridView.Columns>
    48.</telerikGrid:RadGridView>
    49.</Grid>

    I'll explain what's happening here by line numbers:

    Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel.
    Line 25: The source for the jobs will be a collection in the viewmodel.
    Line 26: We also bind the selected item to a public property from the viewmodel for use in code.
    Line 27: We've turned the event into a command so we can handle it via code in the viewmodel.
    So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read on Damien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this:

    SelectedItemChangedEventBehavior.cs:

    01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl>
    02.{
    03.public SelectedItemChangedEventBehavior(DataControl element)
    04.: base(element)
    05.{
    06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged);
    07.}
    08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e)
    09.{
    10.// We'll only ever allow single selection, so will only need item index 0
    11.base.CommandParameter = e.AddedItems[0];
    12.base.ExecuteCommand();
    13.}
    14.}

    SelectedItemChangedEventClass.cs:

    01.public class SelectedItemChangedEventClass
    02.{
    03.#region The Command Stuff
    04.public static ICommand GetCommand(DependencyObject obj)
    05.{
    06.return (ICommand)obj.GetValue(CommandProperty);
    07.}
    08.public static void SetCommand(DependencyObject obj, ICommand value)
    09.{
    10.obj.SetValue(CommandProperty, value);
    11.}
    12.public static readonly DependencyProperty CommandProperty =
    13.DependencyProperty.RegisterAttached("Command", typeof(ICommand),
    14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback));
    15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e)
    16.{
    17.DataControl element = dependencyObject as DataControl;
    18.if (element != null)
    19.{
    20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element);
    21.behavior.Command = e.NewValue as ICommand;
    22.}
    23.}
    24.#endregion
    25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element)
    26.{
    27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior;
    28.if (behavior == null)
    29.{
    30.behavior = new SelectedItemChangedEventBehavior(element);
    31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior);
    32.}
    33.return behavior;
    34.}
    35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj)
    36.{
    37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty);
    38.}
    39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value)
    40.{
    41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value);
    42.}
    43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty =
    44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior",
    45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null);
    46.}

    These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right?
    Viewmodel for the Job Posting View

    The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line:

    001.public class JobPostingViewModel : ViewModelBase
    002.{
    003.private readonly IEventAggregator eventAggregator;
    004.private readonly IRegionManager regionManager;
    005.public DelegateCommand<object> AddRecord { get; set; }
    006.public DelegateCommand<object> EditRecord { get; set; }
    007.public DelegateCommand<object> SelectedItemChanged { get; set; }
    008.public RecruitingContext context;
    009.private QueryableCollectionView _myJobs;
    010.public QueryableCollectionView MyJobs
    011.{
    012.get { return _myJobs; }
    013.}
    014.private QueryableCollectionView _selectionJobActionHistory;
    015.public QueryableCollectionView SelectedJobActionHistory
    016.{
    017.get { return _selectionJobActionHistory; }
    018.}
    019.private JobPosting _selectedJob;
    020.public JobPosting SelectedJob
    021.{
    022.get { return _selectedJob; }
    023.set
    024.{
    025.if (value != _selectedJob)
    026.{
    027._selectedJob = value;
    028.NotifyChanged("SelectedJob");
    029.}
    030.}
    031.}
    032.public SubscriptionToken editToken = new SubscriptionToken();
    033.public SubscriptionToken addToken = new SubscriptionToken();
    034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager)
    035.{
    036.// set Unity items
    037.this.eventAggregator = eventAgg;
    038.this.regionManager = regionmanager;
    039.// load our context
    040.context = new RecruitingContext();
    041.this._myJobs = new QueryableCollectionView(context.JobPostings);
    042.context.Load(context.GetJobPostingsQuery());
    043.// set command events
    044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord);
    045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord);
    046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged);
    047.SetSubscriptions();
    048.}
    049.#region DelegateCommands from View
    050.public void AddNewRecord(object obj)
    051.{
    052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true);
    053.}
    054.public void EditExistingRecord(object obj)
    055.{
    056.if (_selectedJob == null)
    057.{
    058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected.");
    059.}
    060.else
    061.{
    062.this._myJobs.EditItem(this._selectedJob);
    063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob);
    064.}
    065.}
    066.public void SelectedRecordChanged(object obj)
    067.{
    068.if (obj.GetType() == typeof(ActionHistory))
    069.{
    070.// event bubbles up so we don't catch items from the ActionHistory grid
    071.}
    072.else
    073.{
    074.JobPosting job = obj as JobPosting;
    075.GrabHistory(job.PostingID);
    076.}
    077.}
    078.#endregion
    079.#region Subscription Declaration and Events
    080.public void SetSubscriptions()
    081.{
    082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>();
    083.if (editToken != null)
    084.editComplete.Unsubscribe(editToken);
    085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler);
    086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>();
    087.if (addToken != null)
    088.addComplete.Unsubscribe(addToken);
    089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler);
    090.}
    091.public void EditCompleteEventHandler(bool complete)
    092.{
    093.if (complete)
    094.{
    095.JobPosting thisJob = _myJobs.CurrentEditItem as JobPosting;
    096.this._myJobs.CommitEdit();
    097.this.context.SubmitChanges((s) =>
    098.{
    099.ActionHistory myAction = new ActionHistory();
    100.myAction.PostingID = thisJob.PostingID;
    101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user");
    102.myAction.TimeStamp = DateTime.Now;
    103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction);
    104.}
    105., null);
    106.}
    107.else
    108.{
    109.this._myJobs.CancelEdit();
    110.}
    111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView");
    112.}
    113.public void AddCompleteEventHandler(JobPosting job)
    114.{
    115.if (job == null)
    116.{
    117.// do nothing, new job add cancelled
    118.}
    119.else
    120.{
    121.this.context.JobPostings.Add(job);
    122.this.context.SubmitChanges((s) =>
    123.{
    124.ActionHistory myAction = new ActionHistory();
    125.myAction.PostingID = job.PostingID;
    126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user");
    127.myAction.TimeStamp = DateTime.Now;
    128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction);
    129.}
    130., null);
    131.}
    132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView");
    133.}
    134.#endregion
    135.public void GrabHistory(int postID)
    136.{
    137.context.ActionHistories.Clear();
    138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories);
    139.context.Load(context.GetHistoryForJobQuery(postID));
    140.}
    141.}

    Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road, and we also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen, but I will get into below. The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves for functionality; the one thing of note is that we're sending events off to the Event Aggregator, which some module, somewhere, will take care of. Since these aren't entirely relying on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63) regardless. Don't mind the GrabHistory() method so much; that is just grabbing history items (visibly being created in the SubmitChanges callbacks) and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;) So where are we at? Well, if we click to Add a job, we publish an event; if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going though? To the Viewmodel, of course!
    XAML for the AddEditJobView

    This is pretty straightforward except for one thing, noted below:

    001.<Grid x:Name="LayoutRoot"
    002.Background="White">
    003.<Grid x:Name="xEditGrid"
    004.Margin="10"
    005.validationHelper:ValidationScope.Errors="{Binding Errors}">
    006.<Grid.Background>
    007.<LinearGradientBrush EndPoint="0.5,1"
    008.StartPoint="0.5,0">
    009.<GradientStop Color="#FFC7C7C7"
    010.Offset="0" />
    011.<GradientStop Color="#FFF6F3F3"
    012.Offset="1" />
    013.</LinearGradientBrush>
    014.</Grid.Background>
    015.<Grid.RowDefinitions>
    016.<RowDefinition Height="40" />
    017.<RowDefinition Height="40" />
    018.<RowDefinition Height="40" />
    019.<RowDefinition Height="100" />
    020.<RowDefinition Height="100" />
    021.<RowDefinition Height="100" />
    022.<RowDefinition Height="40" />
    023.<RowDefinition Height="40" />
    024.<RowDefinition Height="40" />
    025.</Grid.RowDefinitions>
    026.<Grid.ColumnDefinitions>
    027.<ColumnDefinition Width="150" />
    028.<ColumnDefinition Width="150" />
    029.<ColumnDefinition Width="300" />
    030.<ColumnDefinition Width="100" />
    031.</Grid.ColumnDefinitions>
    032.<!-- Title -->
    033.<TextBlock Margin="8"
    034.Text="{Binding AddEditString}"
    035.TextWrapping="Wrap"
    036.Grid.Column="1"
    037.Grid.ColumnSpan="2"
    038.FontSize="16" />
    039.<!-- Data entry area-->
    040.
    041.<TextBlock Margin="8,0,0,0"
    042.Style="{StaticResource LabelTxb}"
    043.Grid.Row="1"
    044.Text="Job Title"
    045.VerticalAlignment="Center" />
    046.<TextBox x:Name="xJobTitleTB"
    047.Margin="0,8"
    048.Grid.Column="1"
    049.Grid.Row="1"
    050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    051.Grid.ColumnSpan="2" />
    052.<TextBlock Margin="8,0,0,0"
    053.Grid.Row="2"
    054.Text="Location"
    055.d:LayoutOverrides="Height"
    056.VerticalAlignment="Center" />
    057.<TextBox x:Name="xLocationTB"
    058.Margin="0,8"
    059.Grid.Column="1"
    060.Grid.Row="2"
    061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    062.Grid.ColumnSpan="2" />
    063.
    064.<TextBlock Margin="8,11,8,0"
    065.Grid.Row="3"
    066.Text="Description"
    067.TextWrapping="Wrap"
    068.VerticalAlignment="Top" />
    069.
    070.<TextBox x:Name="xDescriptionTB"
    071.Height="84"
    072.TextWrapping="Wrap"
    073.ScrollViewer.VerticalScrollBarVisibility="Auto"
    074.Grid.Column="1"
    075.Grid.Row="3"
    076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    077.Grid.ColumnSpan="2" />
    078.<TextBlock Margin="8,11,8,0"
    079.Grid.Row="4"
    080.Text="Requirements"
    081.TextWrapping="Wrap"
    082.VerticalAlignment="Top" />
    083.
    084.<TextBox x:Name="xRequirementsTB"
    085.Height="84"
    086.TextWrapping="Wrap"
    087.ScrollViewer.VerticalScrollBarVisibility="Auto"
    088.Grid.Column="1"
    089.Grid.Row="4"
    090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    091.Grid.ColumnSpan="2" />
    092.<TextBlock Margin="8,11,8,0"
    093.Grid.Row="5"
    094.Text="Qualifications"
    095.TextWrapping="Wrap"
    096.VerticalAlignment="Top" />
    097.
    098.<TextBox x:Name="xQualificationsTB"
    099.Height="84"
    100.TextWrapping="Wrap"
    101.ScrollViewer.VerticalScrollBarVisibility="Auto"
    102.Grid.Column="1"
    103.Grid.Row="5"
    104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
    105.Grid.ColumnSpan="2" />
    106.<!-- Requirements Checkboxes-->
    107.
    108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15"
    109.Content="Resume Required"
    110.Grid.Row="6"
    111.Grid.ColumnSpan="2"
    112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    113.
    114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15"
    115.Content="Cover Letter Required"
    116.Grid.Column="2"
    117.Grid.Row="6"
    118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    119.
    120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15"
    121.Content="Overview Required"
    122.Grid.Row="7"
    123.Grid.ColumnSpan="2"
    124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    125.
    126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15"
    127.Content="Job is Active"
    128.Grid.Column="2"
    129.Grid.Row="7"
    130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
    131.
    132.<!-- Buttons -->
    133.
    134.<Button x:Name="xAddEditButton" Margin="8,8,0,10"
    135.Content="{Binding AddEditButtonString}"
    136.cal:Click.Command="{Binding AddEditCommand}"
    137.Grid.Column="2"
    138.Grid.Row="8"
    139.HorizontalAlignment="Left"
    140.Width="125"
    141.telerik:StyleManager.Theme="Windows7" />
    142.
    143.<Button x:Name="xCancelButton" HorizontalAlignment="Right"
    144.Content="Cancel"
    145.cal:Click.Command="{Binding CancelCommand}"
    146.Margin="0,8,8,10"
    147.Width="125"
    148.Grid.Column="2"
    149.Grid.Row="8"
    150.telerik:StyleManager.Theme="Windows7" />
    151.</Grid>
    152.</Grid>

    The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. This all comes from an approach found on the Joy of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood.

    Viewmodel for the AddEditJobView

    We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff:

    001.public class AddEditJobViewModel : ViewModelBase
    002.{
    003.private readonly IEventAggregator eventAggregator;
    004.private readonly IRegionManager regionManager;
    005.
    006.public RecruitingContext context;
    007.
    008.private JobPosting _activeJob;
    009.public JobPosting activeJob
    010.{
    011.get { return _activeJob; }
    012.set
    013.{
    014.if (_activeJob != value)
    015.{
    016._activeJob = value;
    017.NotifyChanged("activeJob");
    018.}
    019.}
    020.}
    021.
    022.public bool isNewJob;
    023.
    024.private string _addEditString;
    025.public string AddEditString
    026.{
    027.get { return _addEditString; }
    028.set
    029.{
    030.if (_addEditString != value)
    031.{
    032._addEditString = value;
    033.NotifyChanged("AddEditString");
    034.}
    035.}
    036.}
    037.
    038.private string _addEditButtonString;
    039.public string AddEditButtonString
    040.{
    041.get { return _addEditButtonString; }
    042.set
    043.{
    044.if (_addEditButtonString != value)
    045.{
    046._addEditButtonString = value;
    047.NotifyChanged("AddEditButtonString");
    048.}
    049.}
    050.}
    051.
    052.public SubscriptionToken addJobToken = new SubscriptionToken();
    053.public SubscriptionToken editJobToken = new SubscriptionToken();
    054.
    055.public DelegateCommand<object> AddEditCommand { get; set; }
    056.public DelegateCommand<object> CancelCommand { get; set; }
    057.
    058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>();
    059.public ObservableCollection<ValidationError> Errors
    060.{
    061.get { return _errors; }
    062.}
    063.
    064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
    065.public ObservableCollection<ValidationResult> ValResults
    066.{
    067.get { return this._valResults; }
    068.}
    069.
    070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager)
    071.{
    072.// set Unity items
    073.this.eventAggregator = eventAgg;
    074.this.regionManager = regionmanager;
    075.
    076.context = new RecruitingContext();
    077.
    078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand);
    079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand);
    080.
    081.SetSubscriptions();
    082.}
    083.
    084.#region Subscription Declaration and Events
    085.
    086.public void SetSubscriptions()
    087.{
    088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>();
    089.
    090.if (addJobToken != null)
    091.addJob.Unsubscribe(addJobToken);
    092.
    093.addJobToken = addJob.Subscribe(this.AddJobEventHandler);
    094.
    095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>();
    096.
    097.if (editJobToken != null)
    098.editJob.Unsubscribe(editJobToken);
    099.
    100.editJobToken = editJob.Subscribe(this.EditJobEventHandler);
    101.}
    102.
    103.public void AddJobEventHandler(bool isNew)
    104.{
    105.this.activeJob = null;
    106.this.activeJob = new JobPosting();
    107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately
    108.this.isNewJob = true;
    109.this.AddEditString = "Add New Job Posting";
    110.this.AddEditButtonString = "Add Job";
    111.
    112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView");
    113.}
    114.
    115.public void EditJobEventHandler(JobPosting editJob)
    116.{
    117.this.activeJob = null;
    118.this.activeJob = editJob;
    119.this.isNewJob = false;
    120.this.AddEditString = "Edit Job Posting";
    121.this.AddEditButtonString = "Edit Job";
    122.
    123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView");
    124.}
    125.
    126.#endregion
    127.
    128.#region DelegateCommands from View
    129.
    130.public void AddEditJobCommand(object obj)
    131.{
    132.if (this.Errors.Count > 0)
    133.{
    134.List<string> errorMessages = new List<string>();
    135.
    136.foreach (var valR in this.Errors)
    137.{
    138.errorMessages.Add(valR.Exception.Message);
    139.}
    140.
    141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages);
    142.
    143.}
    144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
    145.{
    146.List<string> errorMessages = new List<string>();
    147.
    148.foreach (var valR in this._valResults)
    149.{
    150.errorMessages.Add(valR.ErrorMessage);
    151.}
    152.
    153.this._valResults.Clear();
    154.
    155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages);
    156.}
    157.else
    158.{
    159.if (this.isNewJob)
    160.{
    161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob);
    162.}
    163.else
    164.{
    165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true);
    166.}
    167.}
    168.}
    169.
    170.public void CancelAddEditCommand(object obj)
    171.{
    172.if (this.isNewJob)
    173.{
    174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null);
    175.}
    176.else
    177.{
    178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false);
    179.}
    180.}
    181.
    182.#endregion
    183.}
    184.}

    We start seeing something new on line 103: the AddJobEventHandler will create a new job and set that to the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason. As I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way, if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :)

    Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success, or send a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation:

    private JobPostingMetadata() { }

    [StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)]
    [Required(ErrorMessage = "Description is required.")]
    public string Description;

    [Required(ErrorMessage = "Active Status is Required")]
    public bool IsActive;

    [StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)]
    [Required(ErrorMessage = "Job Title is required.")]
    public string JobTitle;

    [Required]
    public string Location;

    public bool NeedsCV;
    public bool NeedsOverview;
    public bool NeedsResume;
    public int PostingID;

    [Required(ErrorMessage = "Qualifications are required.")]
    [StringLength(2500, ErrorMessage = "Qualifications should be more than one and less than 2500 characters.", MinimumLength = 1)]
    public string Qualifications;

    [StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)]
    [Required(ErrorMessage = "Requirements are required.")]
    public string Requirements;

    The RecruitCB Alternative

    See all that XAML I pasted above? Those are now two pieces sittinging in the JobsView.xaml file. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping between the two for a quick switch. I also took out all the cal: and command: command references, replaced the Button commands with Click events, and replaced the Grid selection command with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I added a ValidationSummary (with Visibility=Collapsed) to catch any errors that are popping up.
    Simple as can be, and leads to this being the single code-behind file:

    001.public partial class JobsView : UserControl
    002.{
    003.public RecruitingContext context;
    004.public JobPosting activeJob;
    005.public bool isNew;
    006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
    007.public ObservableCollection<ValidationResult> ValResults
    008.{
    009.get { return this._valResults; }
    010.}
    011.public JobsView()
    012.{
    013.InitializeComponent();
    014.this.Loaded += new RoutedEventHandler(JobsView_Loaded);
    015.}
    016.void JobsView_Loaded(object sender, RoutedEventArgs e)
    017.{
    018.context = new RecruitingContext();
    019.xJobsGrid.ItemsSource = context.JobPostings;
    020.context.Load(context.GetJobPostingsQuery());
    021.}
    022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e)
    023.{
    024.activeJob = new JobPosting();
    025.isNew = true;
    026.xAddEditTitle.Text = "Add a Job Posting";
    027.xAddEditButton.Content = "Add";
    028.xEditGrid.DataContext = activeJob;
    029.HideJobsGrid();
    030.}
    031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e)
    032.{
    033.activeJob = xJobsGrid.SelectedItem as JobPosting;
    034.isNew = false;
    035.xAddEditTitle.Text = "Edit a Job Posting";
    036.xAddEditButton.Content = "Edit";
    037.xEditGrid.DataContext = activeJob;
    038.HideJobsGrid();
    039.}
    040.private void xAddEditButton_Click(object sender, RoutedEventArgs e)
    041.{
    042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
    043.{
    044.List<string> errorMessages = new List<string>();
    045.foreach (var valR in this._valResults)
    046.{
    047.errorMessages.Add(valR.ErrorMessage);
    048.}
    049.this._valResults.Clear();
    050.ShowErrors(errorMessages);
    051.}
    052.else if (xSummary.Errors.Count > 0)
    053.{
    054.List<string> errorMessages = new List<string>();
    055.foreach (var err in xSummary.Errors)
    056.{
    057.errorMessages.Add(err.Message);
    058.}
    059.ShowErrors(errorMessages);
    060.}
    061.else
    062.{
    063.if (this.isNew)
    064.{
    065.context.JobPostings.Add(activeJob);
    066.context.SubmitChanges((s) =>
    067.{
    068.ActionHistory thisAction = new ActionHistory();
    069.thisAction.PostingID = activeJob.PostingID;
    070.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user");
    071.thisAction.TimeStamp = DateTime.Now;
    072.context.ActionHistories.Add(thisAction);
    073.context.SubmitChanges();
    074.}, null);
    075.}
    076.else
    077.{
    078.context.SubmitChanges((s) =>
    079.{
    080.ActionHistory thisAction = new ActionHistory();
    081.thisAction.PostingID = activeJob.PostingID;
    082.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user");
    083.thisAction.TimeStamp = DateTime.Now;
    084.context.ActionHistories.Add(thisAction);
    085.context.SubmitChanges();
    086.}, null);
    087.}
    088.ShowJobsGrid();
    089.}
    090.}
    091.private void xCancelButton_Click(object sender, RoutedEventArgs e)
    092.{
    093.ShowJobsGrid();
    094.}
    095.private void ShowJobsGrid()
    096.{
    097.xAddEditRecordButtonPanel.Visibility = Visibility.Visible;
    098.xEditGrid.Visibility = Visibility.Collapsed;
    099.xJobsGrid.Visibility = Visibility.Visible;
    100.}
    101.private void HideJobsGrid()
    102.{
    103.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed;
    104.xJobsGrid.Visibility = Visibility.Collapsed;
    105.xEditGrid.Visibility = Visibility.Visible;
    106.}
    107.private void ShowErrors(List<string> errorList)
    108.{
    109.string nm = "Errors received: \n";
    110.foreach (string anerror in errorList)
    111.nm += anerror + "\n";
    112.RadWindow.Alert(nm);
    113.}
    114.}

    The first 39 lines should be pretty familiar, not doing anything too unorthodox to get this up and running. Once we hit the xAddEditButton_Click on line 40, we're still doing pretty much the same things, except instead of checking the ValidationHelper errors, we both run a check on the current activeJob object and check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 66 and 78) to create an ActionHistory which we will use to track these items down the line.

    That's all? Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobsView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and code-behind versions of the Jobs and AddEdit views; since the functionality is the same in both apps, you still cannot tell them apart (strike two):

    Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there. So that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun: the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue.

    Read the article

  • Amazon Product Advertising API SOAP Namespace Changes

    - by Rick Strahl
    About two months ago (towards the end of February 2012, I think) Amazon decided to change the namespace of the Product Advertising API. The error that would come up was:

    <ItemSearchResponse > was not expected.

    If you've used the Amazon Product Advertising API you probably know that Amazon has made it a habit to break the services every few years or so, and I guess last month was about the time for another one. Basically the service namespace of the document has been changed, and responses from the service just failed outright even though the rest of the schema looks fine. Now I looked around for a while trying to find a recent update to the Product Advertising API - something semi-official looking - but everything is dated around 2009. Really??? And it's not just .NET - the newest thing on the samples/APIs page is dated early 2011, plus a handful of 2010 samples. There are newer full APIs for the 'cloud' offerings, but the Product Advertising API apparently isn't part of that. After searching for quite a bit trying to trace this down myself and trying some of the newer samples (which also failed), I found an obscure forum post that describes the solution to the namespace issue.

    FWIW, I've been using an old version of the Product Advertising API using the old Microsoft WSE3 services (pre-WCF), which provides some of the WS* security features required by the Amazon service. The fix for this code is to explicitly override the namespace declaration on each of the imported service method signatures. The old service namespace (at least on my build) was:

    http://webservices.amazon.com/AWSECommerceService/2009-03-31

    and it should be changed to:

    http://webservices.amazon.com/AWSECommerceService/2011-08-01

    Change it on the class header:

    [Microsoft.Web.Services3.Messaging.SoapService("http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
    [System.Xml.Serialization.XmlIncludeAttribute(typeof(Property[]))]
    [System.Xml.Serialization.XmlIncludeAttribute(typeof(BrowseNode[]))]
    [System.Xml.Serialization.XmlIncludeAttribute(typeof(TransactionItem[]))]
    public partial class AWSECommerceService : Microsoft.Web.Services3.Messaging.SoapClient {

    and on all method signatures:

    [Microsoft.Web.Services3.Messaging.SoapMethodAttribute("http://soap.amazon.com/ItemSearch")]
    [return: System.Xml.Serialization.XmlElementAttribute("ItemSearchResponse", Namespace="http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
    public ItemSearchResponse ItemSearch(ItemSearch ItemSearch1) {
        Microsoft.Web.Services3.SoapEnvelope results = base.SendRequestResponse("ItemSearch", ItemSearch1);
        return ((ItemSearchResponse)(results.GetBodyObject(typeof(ItemSearchResponse), this.SoapServiceAttribute.TargetNamespace)));
    }

    It's easy to do with a Search and Replace on the above strings.

    Amazon Services

    <rant>
    FWIW, I've not been impressed by Amazon's service offerings. While the services work well, their documentation and tool support is absolutely horrendous. I was recently working with a customer on an old AWS application, and their old API had been completely removed, with a new API that wasn't even a close match. One old API call resulted in requiring three different APIs to perform the same functionality. We had to re-write the entire piece from scratch, essentially. The documentation was downright wrong, incomplete, and so scattered it was next to impossible to follow. The examples weren't examples at all - they're mockups of real service calls with fake data that didn't even provide everything that was required to make the same service calls work. Additionally, there appears to be just about no public support from Amazon, only peer support, which is sparse at best - and getting hold of somebody at Amazon, even for pay, seems to be a mythical task. It's a terrible business model they have going. I can't see why anybody would put themselves through this sort of customer and development experience. Sad really, but an experience we see more and more these days. Nobody puts in the time to document anything anymore, leaving it to devs to figure this stuff out over and over again…
    </rant>

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in CSharp, Web Services

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says:

    PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used.

    Love it or hate it, PHP re-casts variables on-the-fly. So, the following code is valid:

    $var = "10";
    $value = 10 + $var;
    var_dump($value); // int(20)

    PHP also allows you to explicitly cast a variable, like so:

    $var = "10";
    $value = 10 + $var;
    $value = (string)$value;
    var_dump($value); // string(2) "20"

    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of - and fully understand the usefulness of - type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at-will, it can do so even after you force-cast a type; and it can do so on-the-fly when you need a certain type in an operation. That makes the following valid:

    $var = "10";
    $value = (int)$var;
    $value = $value . ' TaDa!';
    var_dump($value); // string(8) "10 TaDa!"

    So what's the point? Can anyone show me a practical application or example of type casting - one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective.

    Edit in response to Chris' comment:

    Take this theoretical example of a world where user-defined type casting makes sense in PHP:

    1. You force cast variable $foo as int -- (int)$foo.
    2. You attempt to store a string value in the variable $foo.
    3. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists!

    The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:

    // example 1
    $foo = 0;
    $foo = (string)$foo;
    $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

    // example 2
    $foo = 0;
    $foo = (int)$foo;
    $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

    UPDATE

    Guess who found himself using typecasting in a practical environment? Yours Truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following:

    Menu Item 1 .............. $ 4
    Menu Item 2 .............. $ 7.5
    Menu Item 3 .............. $ 3

    The best way I found to do that was to cast the variable as a float:

    $price = '7.50'; // a string from the database layer.
    echo 'Menu Item 2 .............. $ ' . (float)$price;

    PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation.

    Read the article

  • C# 4.0: COM Interop Improvements

    - by Paulo Morgado
    Dynamic resolution, as well as named and optional arguments, greatly improve the experience of interoperating with COM APIs such as the Office Automation Primary Interop Assemblies (PIAs). But, in order to alleviate COM Interop development even more, a few COM-specific features were also added to C# 4.0.

    Omitting ref
    Because of a different programming model, many COM APIs contain a lot of reference parameters. These parameters are typically not meant to mutate a passed-in argument, but are simply another way of passing value parameters. Specifically for COM methods, the compiler allows you to declare the method call passing the arguments by value; it will automatically generate the necessary temporary variables to hold the values in order to pass them by reference, and will discard their values after the call returns. From the point of view of the programmer, the arguments are being passed by value. This method call:

    object fileName = "Test.docx";
    object missing = Missing.Value;
    document.SaveAs(ref fileName, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing);

    can now be written like this:

    document.SaveAs("Test.docx", Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value);

    And because all parameters that are receiving the Missing.Value value have that value as their default value, the method call can even be reduced to this:

    document.SaveAs("Test.docx");

    Dynamic Import
    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from the context of the call, but has to explicitly perform a cast on the returned values to make use of that knowledge. These casts are so common that they constitute a major nuisance. To make the developer’s life easier, it is now possible to import the COM APIs in such a way that variants are instead represented using the type dynamic, which means that COM signatures now have occurrences of dynamic instead of object. This means that members of a returned object can now be easily accessed, or assigned into a strongly typed variable, without having to cast. Instead of this code:

    ((Excel.Range)(excel.Cells[1, 1])).Value2 = "Hello World!";

    this code can now be used:

    excel.Cells[1, 1] = "Hello World!";

    And instead of this:

    Excel.Range range = (Excel.Range)(excel.Cells[1, 1]);

    this can be used:

    Excel.Range range = excel.Cells[1, 1];

    Indexed And Default Properties
    A few COM interface features are still not available in C#. At the top of the list are indexed properties and default properties. As mentioned above, these will be possible if the COM interface is accessed dynamically, but will not be recognized by statically typed C# code.

    No PIAs – Type Equivalence And Type Embedding
    For assemblies identified with PrimaryInteropAssemblyAttribute, the compiler will create equivalent types (interfaces, structs, enumerations and delegates) and embed them in the generated assembly. To reduce the final size of the generated assembly, only the used types and their used members will be generated and embedded.
Although this makes development and deployment of applications using the COM components easier because there’s no need to deploy the PIAs, COM component developers are still required to build the PIAs.
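    To make the dynamic import concrete, here is a minimal sketch of a call chain that needs no casts at all (assuming a project that references the Excel PIA with "Embed Interop Types" enabled; the member names follow the standard Office object model):

        // Minimal sketch, assuming the Excel PIA is referenced with interop types embedded.
        // Cells is imported as dynamic, so both the indexing and the Value2 member
        // access below are resolved at run time by the DLR -- no (Excel.Range) cast needed.
        var excel = new Microsoft.Office.Interop.Excel.Application();
        excel.Workbooks.Add();                     // optional COM arguments omitted
        excel.Cells[1, 1].Value2 = "Hello World!";
        excel.Visible = true;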

    Read the article

  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored? Attributes The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using qualifiers, like volatile or const, on the parameters; this is perfectly legal C++: public ref class VolatileMethods { void Method(int *i) {} void Method(volatile int *i) {} } If volatile were specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in. Custom modifiers Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately from its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL: .class public VolatileMethods { .method public instance void Method(int32* i) {} .method public instance void Method( int32 modreq( [mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {} } The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though they are mostly ignored by the CLR when it's executing the program, this allows methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL: call instance void Method( int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*) There are two ways of applying modifiers; modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as the modifier argument are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads. C++ and C# That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa. 
So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ DLLs that happen to come along. Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.
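    A one-field C# class is enough to see the modifier appear from the C# side; a minimal sketch (the IL shown in the comment is approximately what ildasm displays for such a field):

        // A plain C# class with a volatile field...
        public class Counter
        {
            public volatile int Value;
        }

        // ...compiles to a field whose signature carries the required modifier, roughly:
        //
        //   .field public int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) Value
        //
        // Every load and store the C# compiler emits against Value is then prefixed
        // with the volatile. instruction prefix described above.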

    Read the article

  • New JavaScript Editor

    - by Petr
    I have not written a blog post here for a few weeks; I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I have changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written basically from scratch. The answer to the question "Why start from the beginning again, why not just improve the old one?" is not easy, and the decision has several aspects. One of the main reasons is that the old support was written 4 years ago and its architecture is limited. Also, over time the APIs changed and it was very hard to keep the editor up to date. There is also a license issue, etc. In short, it is time to rewrite the old JS editor. We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support. You can download the result of the builds here. It's a zip file. You can unzip the file anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. It shouldn't happen, but just to be sure :). You can achieve this through the switch --userdir. Starting the unzipped build from the command line, from the folder where you unzipped it, can be done with this command on Unix: bin/netbeans.sh --userdir /path/to/new/userdir and on Windows: bin\netbeans.exe --userdir D:\path\to\new\userdir Developers who already use the continual PHP build will find this familiar. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, no nightly builds are available. They will be, when we merge the branch to the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2. I'm asking whether you could play with the builds or, better, work in the builds with the new JavaScript support and tell us every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input / comments are very important for us and will help us to achieve the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue is opened, with the component Editor and the NO72 keyword pre-filled. I will write about the individual features later, but for now I will mention a few features that should work better than in the old support. Syntactic and semantic colouring Navigator Mark Occurrences and GoTo Declaration Code Completion Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items that are found through a source model. Almost all editor features are based on the model, which is built from the source code. There is still a lot of work to do on the model, but it should offer better results. 
When the popup window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project; in the pictures, all elements that start with the letter 't'. Formatter with many options. And more :) A few features that are supported in the old JavaScript support (for example jQuery support) are not yet implemented, but we are adding these features ASAP.

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general. Signature types There are several types of signatures in an assembly, all of which share a common base representation, and are all stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are: Method definition and method reference signatures. Field signatures. Property signatures. Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers. Generic type specifications. These represent a particular instantiation of a generic type. Generic method specifications. Similarly, these represent a particular instantiation of a generic method. All these signatures share the same underlying mechanism to represent a type. Representing a type All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, UInt16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information. Firstly, custom types (i.e. not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well; for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicate custom modifiers. Some examples... To demonstrate, here are a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets. A simple field: int intField; FIELD I4 A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type): T[] genArrayField FIELD SZARRAY VAR 0 An instance method signature (note how the number of parameters does not include the return type): instance string MyMethod(MyType, int&, bool[][]); HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN A generic type instantiation: MyGenericType<MyType, MyStruct> GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct} For more complicated examples, in the following C# type declaration: GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... 
} the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob: GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0 And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR): TResult[] GenericMethod<TInput, TResult>( TInput, System.Converter<TInput, TResult>); GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1 As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now that we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.
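    Since the compressed integer format comes up in almost every blob above, here is a minimal sketch of a decoder for it (a hypothetical helper written against the ECMA-335 compression rules; it is not code from the post):

        // ECMA-335 packs unsigned integers into 1, 2 or 4 bytes by value range:
        //   0xxxxxxx                     -> 1 byte,  0x00 .. 0x7F
        //   10xxxxxx xxxxxxxx            -> 2 bytes, 0x80 .. 0x3FFF
        //   110xxxxx + three more bytes  -> 4 bytes, up to 0x1FFFFFFF
        static uint ReadCompressedUInt(byte[] blob, ref int pos)
        {
            byte first = blob[pos++];
            if ((first & 0x80) == 0)                     // 1-byte form
                return first;
            if ((first & 0xC0) == 0x80)                  // 2-byte form
                return (uint)(((first & 0x3F) << 8) | blob[pos++]);
            return (uint)(((first & 0x1F) << 24)         // 4-byte form
                        | (blob[pos++] << 16)
                        | (blob[pos++] << 8)
                        |  blob[pos++]);
        }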

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit, an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving into the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. 
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.

    Read the article

  • Strings in .NET are Enumerable

    - by Scott Dorman
    It seems like there is always some confusion concerning strings in .NET. This comes both from developers who are new to the Framework and from those who have been working with it for quite some time. Strings in the .NET Framework are represented by the System.String class, which encapsulates the data manipulation, sorting, and searching methods you most commonly perform on string data. In the .NET Framework, you can use System.String (the actual type name) or the language alias (for C#, string). They are equivalent, so use whichever naming convention you prefer, but be consistent. Common usage (and my preference) is to use the language alias (string) when referring to the data type and String (the actual type name) when accessing the static members of the class. Many mainstream programming languages (like C and C++) treat strings as a null-terminated array of characters. The .NET Framework, however, treats strings as an immutable sequence of Unicode characters which cannot be modified after it has been created. Because strings are immutable, all operations which modify the string contents are actually creating new string instances and returning those. They never modify the original string data. There is one important word in the preceding paragraph which many people tend to miss: sequence. In .NET, strings are treated as a sequence…in fact, they are treated as an enumerable sequence. This can be verified if you look at the class declaration for System.String, as seen below: // Summary: // Represents text as a series of Unicode characters. public sealed class String : IEnumerable, IComparable, IComparable<string>, IEquatable<string> The first interface that String implements is IEnumerable, which has the following definition: // Summary: // Exposes the enumerator, which supports a simple iteration over a non-generic // collection. public interface IEnumerable { // Summary: // Returns an enumerator that iterates through a collection. // // Returns: // An System.Collections.IEnumerator object that can be used to iterate through // the collection. IEnumerator GetEnumerator(); } As a side note, System.Array also implements IEnumerable. Why is that important to know? Simply put, it means that any operation you can perform on an array can also be performed on a string. This allows you to write code such as the following: string s = "The quick brown fox"; foreach (var c in s) { System.Diagnostics.Debug.WriteLine(c); } for (int i = 0; i < s.Length; i++) { System.Diagnostics.Debug.WriteLine(s[i]); } If you executed those lines of code in a running application, you would see the following output in the Visual Studio Output window: In the case of a string, these enumerable or array operations return a char (System.Char) rather than a string. That might lead you to believe that you can get around the string immutability restriction by simply treating strings as an array and assigning a new character to a specific index location inside the string, like this: string s = "The quick brown fox"; s[2] = 'a'; However, if you were to write such code, the compiler will promptly tell you that you can't do it: This preserves the notion that strings are immutable and cannot be changed once they are created. (Incidentally, there is no built-in way to replace a single character like this. It can be done, but it would require converting the string to a character array, changing the appropriate indexed location, and then creating a new string.)
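    That parenthetical workaround is simple enough to sketch (the variable names here are hypothetical, not from the post):

        // Strings are immutable, so "replacing" a character means building a new
        // string from a mutable copy of the character sequence:
        string s = "The quick brown fox";
        char[] chars = s.ToCharArray();       // copy the sequence into a char array
        chars[2] = 'a';                       // legal: arrays are mutable ("The" -> "Tha")
        string modified = new string(chars);  // "Tha quick brown fox"

        // And because string implements IEnumerable, sequence operators apply to it
        // directly, e.g. counting the spaces (requires using System.Linq):
        // int spaces = s.Count(c => c == ' ');   // 3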

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, not entities, collections or primitive types. These are used for mapping database columns, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image. User types must implement the interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty, and, when the session is flushed, this will trigger an UPDATE. So, in your custom user type, you must implement this carefully so that the entity is not mistakenly considered changed. For example, you can cache the original column value inside of it, and compare it with the one in the other instance. Let's see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image:

        [Serializable]
        public sealed class ImageUserType : IUserType
        {
            private Byte[] data = null;

            public ImageUserType()
            {
                this.ImageFormat = ImageFormat.Png;
            }

            public ImageFormat ImageFormat
            {
                get;
                set;
            }

            public Boolean IsMutable
            {
                get { return (true); }
            }

            public Object Assemble(Object cached, Object owner)
            {
                return (cached);
            }

            public Object DeepCopy(Object value)
            {
                return (value);
            }

            public Object Disassemble(Object value)
            {
                return (value);
            }

            public new Boolean Equals(Object x, Object y)
            {
                return (Object.Equals(x, y));
            }

            public Int32 GetHashCode(Object x)
            {
                return ((x != null) ? x.GetHashCode() : 0);
            }

            public override Int32 GetHashCode()
            {
                return ((this.data != null) ? this.data.GetHashCode() : 0);
            }

            public override Boolean Equals(Object obj)
            {
                ImageUserType other = obj as ImageUserType;

                if (other == null)
                {
                    return (false);
                }

                if (Object.ReferenceEquals(this, other) == true)
                {
                    return (true);
                }

                return (this.data.SequenceEqual(other.data));
            }

            public Object NullSafeGet(IDataReader rs, String[] names, Object owner)
            {
                Int32 index = rs.GetOrdinal(names[0]);
                Byte[] data = rs.GetValue(index) as Byte[];

                this.data = data as Byte[];

                if (data == null)
                {
                    return (null);
                }

                using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0]))
                {
                    return (Image.FromStream(stream));
                }
            }

            public void NullSafeSet(IDbCommand cmd, Object value, Int32 index)
            {
                if (value != null)
                {
                    Image data = value as Image;

                    using (MemoryStream stream = new MemoryStream())
                    {
                        data.Save(stream, this.ImageFormat);
                        value = stream.ToArray();
                    }
                }

                (cmd.Parameters[index] as DbParameter).Value = value ?? DBNull.Value;
            }

            public Object Replace(Object original, Object target, Object owner)
            {
                return (original);
            }

            public Type ReturnedType
            {
                get { return (typeof(Image)); }
            }

            public SqlType[] SqlTypes
            {
                get { return (new SqlType[] { new SqlType(DbType.Binary) }); }
            }
        }

In this case, we need to cache the original Byte[] data because it's not easy to compare two Image instances, unless, of course, they are the same instance.

    Read the article

  • How to layout class definition when inheriting from multiple interfaces

    - by gabr
    Given two interface definitions ...

        IOmniWorkItem = interface ['{3CE2762F-B7A3-4490-BF22-2109C042EAD1}']
          function GetData: TOmniValue;
          function GetResult: TOmniValue;
          function GetUniqueID: int64;
          procedure SetResult(const value: TOmniValue);
        //
          procedure Cancel;
          function DetachException: Exception;
          function FatalException: Exception;
          function IsCanceled: boolean;
          function IsExceptional: boolean;
          property Data: TOmniValue read GetData;
          property Result: TOmniValue read GetResult write SetResult;
          property UniqueID: int64 read GetUniqueID;
        end;

        IOmniWorkItemEx = interface ['{3B48D012-CF1C-4B47-A4A0-3072A9067A3E}']
          function GetOnWorkItemDone: TOmniWorkItemDoneDelegate;
          function GetOnWorkItemDone_Asy: TOmniWorkItemDoneDelegate;
          procedure SetOnWorkItemDone(const Value: TOmniWorkItemDoneDelegate);
          procedure SetOnWorkItemDone_Asy(const Value: TOmniWorkItemDoneDelegate);
        //
          property OnWorkItemDone: TOmniWorkItemDoneDelegate read GetOnWorkItemDone write SetOnWorkItemDone;
          property OnWorkItemDone_Asy: TOmniWorkItemDoneDelegate read GetOnWorkItemDone_Asy write SetOnWorkItemDone_Asy;
        end;

    ... what are your ideas of laying out a class declaration that inherits from both of them? My current idea (but I don't know if I'm happy with it):

        TOmniWorkItem = class(TInterfacedObject, IOmniWorkItem, IOmniWorkItemEx)
        strict private
          FData              : TOmniValue;
          FOnWorkItemDone    : TOmniWorkItemDoneDelegate;
          FOnWorkItemDone_Asy: TOmniWorkItemDoneDelegate;
          FResult            : TOmniValue;
          FUniqueID          : int64;
        strict protected
          procedure FreeException;
        protected //IOmniWorkItem
          function GetData: TOmniValue;
          function GetResult: TOmniValue;
          function GetUniqueID: int64;
          procedure SetResult(const value: TOmniValue);
        protected //IOmniWorkItemEx
          function GetOnWorkItemDone: TOmniWorkItemDoneDelegate;
          function GetOnWorkItemDone_Asy: TOmniWorkItemDoneDelegate;
          procedure SetOnWorkItemDone(const Value: TOmniWorkItemDoneDelegate);
          procedure SetOnWorkItemDone_Asy(const Value: TOmniWorkItemDoneDelegate);
        public
          constructor Create(const data: TOmniValue; uniqueID: int64);
          destructor Destroy; override;
        public //IOmniWorkItem
          procedure Cancel;
          function DetachException: Exception;
          function FatalException: Exception;
          function IsCanceled: boolean;
          function IsExceptional: boolean;
          property Data: TOmniValue read GetData;
          property Result: TOmniValue read GetResult write SetResult;
          property UniqueID: int64 read GetUniqueID;
        public //IOmniWorkItemEx
          property OnWorkItemDone: TOmniWorkItemDoneDelegate read GetOnWorkItemDone write SetOnWorkItemDone;
          property OnWorkItemDone_Asy: TOmniWorkItemDoneDelegate read GetOnWorkItemDone_Asy write SetOnWorkItemDone_Asy;
        end;

    As noted in answers, composition is a good approach for this example, but I'm not sure it applies in all cases. Sometimes I'm using multiple inheritance just to split read and write access to some property into a public (typically read-only) and a private (typically write-only) part. Does composition still apply here? I'm not really sure, as I would have to move the property in question out from the main class and I'm not sure that's the correct way to do it. Example:

        // public part of the interface
        IOmniWorkItemConfig = interface
          function OnExecute(const aTask: TOmniBackgroundWorkerDelegate): IOmniWorkItemConfig;
          function OnRequestDone(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
          function OnRequestDone_Asy(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
        end;

        // private part of the interface
        IOmniWorkItemConfigEx = interface ['{42CEC5CB-404F-4868-AE81-6A13AD7E3C6B}']
          function GetOnExecute: TOmniBackgroundWorkerDelegate;
          function GetOnRequestDone: TOmniWorkItemDoneDelegate;
          function GetOnRequestDone_Asy: TOmniWorkItemDoneDelegate;
        end;

        // implementing class
        TOmniWorkItemConfig = class(TInterfacedObject, IOmniWorkItemConfig, IOmniWorkItemConfigEx)
        strict private
          FOnExecute        : TOmniBackgroundWorkerDelegate;
          FOnRequestDone    : TOmniWorkItemDoneDelegate;
          FOnRequestDone_Asy: TOmniWorkItemDoneDelegate;
        public
          constructor Create(defaults: IOmniWorkItemConfig = nil);
        public //IOmniWorkItemConfig
          function OnExecute(const aTask: TOmniBackgroundWorkerDelegate): IOmniWorkItemConfig;
          function OnRequestDone(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
          function OnRequestDone_Asy(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
        public //IOmniWorkItemConfigEx
          function GetOnExecute: TOmniBackgroundWorkerDelegate;
          function GetOnRequestDone: TOmniWorkItemDoneDelegate;
          function GetOnRequestDone_Asy: TOmniWorkItemDoneDelegate;
        end;

    Read the article

  • Taking a look at the Mindscape Phone Elements for WP7.

    - by mbcrump
    I recently heard that Mindscape HQ had released the Windows Phone 7 Controls and had to take a look at them. 100 FREE LICENSE GIVEAWAY! Before we get to the screenshots, you will be pleased to learn that my usergroup called “Allaboutxaml” has partnered with Mindscape HQ and is giving away 100 licenses. You can check out the site here to get your free controls. But please hurry, as after the 100 are gone I will not have any more to give away! A few links to read first: The official blog post from Mindscape HQ detailing the release. They also have the links to download the trial and get started. The phone elements official forum! So, let's get started. After you download the controls, go ahead and double-click the .exe to get started installing them. After everything is installed you will have the following program group. I'd recommend clicking on the Phone Elements Directory to get started: Let's go over each element: Bin – Just the .DLL that's required to use Mindscape HQ WP7 Controls in your project. Documentation – a .CHM file that will show you how to get your project up and running quickly. Resources – Just a few image files. Samples – This is a full WP7 project that details every control. The thing that I was most interested in was how the controls look and whether they are easy to use. I always believed that if you're paying for controls, then the vendor should hold your hand through using them. You will be pleased to know that Mindscape made this very easy. First, the WP7 project in the “Samples” folder just works. Double-click on the solution file and you are in an emulator looking at the controls. Since you have the source code for every control, it's a matter of copying/pasting the code into your project to get it to work. What I did was play with the controls in the emulator until I found one I could use. Then I looked at the Visual Studio solution and found the page that contained the control. Mindscape makes this very easy to do with their layout: So, the one that I was interested in was the Looping List Box. Here is a demo of it: I wanted to see how they were populating the numbers 1-100, so I found the code-behind and noticed it was just this one line: LoopingListBox1.DataSource = new NumericDataSource() { MinValue = 1, MaxValue = 100 }; In case you are wondering, the NumericDataSource was created by Mindscape, and you can view the Declaration to find out more about it: So, the controls are pretty much that easy to use. Play with the emulator and find the control you want to use. Find the XAML file in the Sample Solution and copy/paste the code. Let's go ahead and take a look at the controls available: They also have a great variety of charting controls: Overall it's a nice set of WP7 controls. Feel free to leave a comment below on anything you would like to see and I will make sure that Mindscape HQ gets the message. Don't forget: if you are among the first 100 people reading this article, you will get a free license.

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluate to false and any other [integer] value to true in most programming languages? String comparison Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I have used - where 0 evaluates to true and all the other [integer] values to false? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings' three-way comparison; I will take C's strcmp as an example: any programmer trying C as his first language may be tempted to write the following code: if (strcmp(str1, str2)) { // Do something... } Since strcmp returns 0, which evaluates to false, when the strings are equal, what the beginning programmer tried to do fails miserably, and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its simplest expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time. Moreover, let's introduce a new type, sign, that just takes the values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there already is the compare function, but a spaceship operator is more fun). The declaration would currently be the following one: sign operator<=>(const std::string& lhs, const std::string& rhs); Had 0 evaluated to true, the spaceship operator wouldn't even exist, and we could have declared operator== that way: sign operator==(const std::string& lhs, const std::string& rhs); This operator== would have handled three-way comparison at once, and could still be used to perform the following check while still being able to check which string is lexicographically superior to the other when needed: if (str1 == str2) { // Do something... } Old error handling We now have exceptions, so this part only applies to the old languages where no such thing exists (C for example). If we look at C's standard library (and the POSIX one too), we can see for sure that maaaaany functions return 0 when successful and any non-zero integer otherwise. I have sadly seen some people do this kind of thing: #define TRUE 0 // ... if (some_function() == TRUE) { // Here, TRUE would mean success... // Do something } If we think about how we think in programming, we often have the following reasoning pattern: Do something. Did it work? Yes -> That's ok, one case to handle. No -> Why? Many cases to handle. If we think about it again, it would have made sense to map the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to solve the many cases of the no. However, in all the programming languages I know (except maybe some experimental esoteric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense. 
Conclusion My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of? Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)
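    As a side note, one hedged C# rendering of the same three-way-comparison pitfall (C# dodges the truthiness question entirely, since only bool is allowed in a condition, while still using 0 for "equal"):

        // C# refuses to treat an int as a condition, so the strcmp-style bug
        // cannot even compile; the three-way result must be compared explicitly.
        string str1 = "abc", str2 = "abc";

        // if (string.CompareOrdinal(str1, str2)) { }   // compile error: int is not bool

        if (string.CompareOrdinal(str1, str2) == 0)
        {
            // equal -- the same 0 that C's strcmp reports, and that C treats as false
        }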

    Read the article

  • MSBuild (.NET 4.0) access problems

    - by JMP
    I'm using CruiseControl.NET as my build server (Windows 2008 Server). Yesterday I upgraded my ASP.NET MVC project from VS 2008/.NET 3.5 to VS 2010/.NET 4.0. The only change I made to my ccnet.config's MSBuild task was the location of MSBuild.exe. Ever since I made that change, the build has been broken with the error: MSB4019 - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk. This file does, in fact, exist in the location specified (I solved a problem similar to this when setting up the build server for VS2008/.NET 3.5 by copying the files from my dev environment to my build environment). So I RDP'ed into the build machine, opened a command prompt, and used MSBUILD to attempt to build my project. MSBUILD returns the error: MSB3021 - Unable to copy file "obj\debug....dll". Access to the path 'bin....dll' is denied. Since I'm running MSBUILD from the command prompt, logged in with an account that has administrative privileges, I'm assuming that MSBUILD is running with the same privileges that I have. Next, I tried to copy the file that MSBUILD was attempting to copy. In this case, I get the UAC dialog that makes me click the [Continue] button to complete the copy. I'd like to avoid installing Visual Studio 2010 on my build machine; can anyone suggest other fixes I might try?

    Read the article

  • Google.com and clients1.google.com/generate_204

    - by David Murdoch
    I was looking into google.com's Net activity in Firebug just because I was curious and noticed a request was returning "204 No Content." It turns out that a 204 No Content "is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view." Whatever. I've looked into the JS source code and saw that "generate_204" is requested like this: (new Image).src="http://clients1.google.com/generate_204" No variable declaration/assignment at all. My first idea was that it was being used to track whether JavaScript is enabled. But the "(new Image).src='...'" call is made from a dynamically loaded external JS file anyway, so that would be pointless. Anyone have any ideas as to what the point could be? UPDATE "/generate_204" appears to be available on many Google services/servers (e.g., maps.google.com/generate_204, maps.gstatic.com/generate_204, etc...). You can take advantage of this by pre-fetching the generate_204 pages for each Google-owned service your web app may use. Like this: window.onload = function(){ var two_o_fours = [ // google maps domain ... "http://maps.google.com/generate_204", // google maps images domains ... "http://mt0.google.com/generate_204", "http://mt1.google.com/generate_204", "http://mt2.google.com/generate_204", "http://mt3.google.com/generate_204", // you can add your own 204 page for your subdomains too! "http://sub.domain.com/generate_204" ]; for(var i = 0, l = two_o_fours.length; i < l; ++i){ (new Image).src = two_o_fours[i]; } };

    Read the article

  • Building a VS2010 solution from TFS2008

    - by slugster
    I have a TFS 2008 Build Agent that has been used to build .NET 3.5 applications. I now have a .NET 4.0 app which I want to compile on the same build agent. I have ensured that MSBuild 4.0 is installed on there and all the required componentry is also installed, but I am getting the following MSB4062 error when building: [Any CPU/Release] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(244,5): error MSB4062: The "Microsoft.WebApplication.Build.Tasks.GetSilverlightItemsFromProperty" task could not be loaded from the assembly C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Confirm that the declaration is correct, and that the assembly and all its dependencies are available. I am presuming that I get this because the TFSBuild.proj gets executed by MSBuild 3.5, which in turn means my solution is compiled with MSBuild 3.5. Am I correct in my diagnosis? Is there any way to ensure that TFS 2008 uses MSBuild 4.0 for my solution? Can it be done on a single team project so that it doesn't affect any other team projects being built on the same build agent? Note that I have checked the question Build failing - VS2010 solution on TFS2008 and this is not a duplicate. Thanks :)

    Read the article

  • Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION)) in SharePoint

    - by BeraCim
    Hi all: After googling for many hours for a solution to the above SharePoint exception, I have come to SO for help on this one... I believe I am getting the above exception because of the following code: try { using (SPSite site = new SPSite(siteId, spUserToken)) { using (SPWeb web = site.OpenWeb(webId)) { createNewSite(web); } } } createNewSite(web) changes the name and URL of "web" using AllowUnsafeUpdates, so when it comes out of the method it has been changed. My few months' worth of SharePoint development experience suggests that that is the cause of the exception. "web" is no longer used afterwards, so I can comfortably null it myself. The problem here is... it didn't work: try { using (SPSite site = new SPSite(siteId, spUserToken)) { SPWeb web = null; using (web = site.OpenWeb(webId)) { createNewSite(web); if (web != null) { web = null; } } } } I believe that the original developer used the using declaration to keep SPWeb objects from leaking. Aside from that, I think it is okay to break this pattern solely for the purpose of getting rid of that dreaded exception. So the question: what can I do to the above code to potentially fix this exception? Thanks.
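    One observation on the second snippet: assigning null to web inside the using block cannot change anything, because the using statement captures the expression's value and disposes that captured reference regardless of what the variable is later set to. A minimal sketch of what the compiler roughly expands the block to (the local name is hypothetical):

        // Rough expansion of "using (web = site.OpenWeb(webId)) { ... }".
        // The hidden local is what gets disposed, so "web = null" has no effect:
        SPWeb captured = (web = site.OpenWeb(webId));
        try
        {
            createNewSite(web);
            web = null;   // only clears the variable, not the captured reference
        }
        finally
        {
            if (captured != null)
                captured.Dispose();
        }

    In other words, the two snippets behave identically, which is consistent with the exception surviving the change.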

    Read the article

  • Installing into the GAC with WiX 3.0

    - by Jeff Yates
    I have a DLL that I would like to install into the Global Assembly Cache so that it can be referenced from multiple locations. I have a File declaration with the Assembly attribute set to ".net" but when the installation tries to install the DLL into the GAC, I get the following error (I have tidied it up a bit to make it more readable): MSI (s) (58:38) [19:14:31:031]: Product: MyProductName 1.01 -- Error 1935. An error occurred during the installation of assembly  'Compass,   version="1.0.0.0",   culture="neutral",   publicKeyToken="392B26B760D48103",   processorArchitecture="MSIL"'. Please refer to Help and Support for more information. HRESULT: 0x80131043. assembly interface:       IAssemblyCacheItem, function:             Commit, component: {53AEE63B-F356-4D4F-8D61-EB0640A6E160} I have hunted around to find out what this means and the error relates to FUSION_E_UNEXPECTED_MODULE_FOUND. This link also includes this information: /// When installing multi-file assemblies into the GAC, the hash of each module is /// checked against the hash of that file stored in the manifest. If the /// hash of one of the files in the multi-file assembly does not match what is recorded /// in the manifest, FUSION_E_UNEXPECTED_MODULE_FOUND will be returned. /// The name of the error, and the text description of it, are somewhat confusing. /// The reason this error code is described this way is that the internally, /// Fusion/CLR implements installation of assemblies in the GAC, by installing /// multiple "streams" that are individually committed. /// Each stream has its hash computed, and all the hashes found /// are compared against the hashes in the manifest, at the end of the installation. /// Hence, a file hash mismatch appears as if an "unexpected" module was found. Unfortunately, this doesn't make much sense to me and I don't see how it relates to my assembly, which isn't fancy or complex from my perspective (it's just a regular .NET 3.5 class library and the current installation test is occurring on my development machine, which is a valid target environment for my project - 32-bit Windows XP SP3). Can anyone shed some light on why I might be getting this error and how I might hope to fix it?

    Read the article

  • Android "single top" launch mode and onNewIntent method

    - by Rich
    I read in the Android documentation that by setting my Activity's launchMode property to singleTop OR by adding the FLAG_ACTIVITY_SINGLE_TOP flag to my Intent, calling startActivity(intent) would reuse a single Activity instance and give me the Intent in the onNewIntent callback. I did both of these things, and onNewIntent never fires and onCreate fires every time. The docs also say that this.getIntent() returns the intent that was first passed to the Activity when it was first created. In onCreate I'm calling getIntent and I'm getting a new one every time (I'm creating the intent object in another activity and adding an extra to it...this extra should be the same every time if it was returning me the same intent object). All this leads me to believe that my activity is not acting like a "single top", and I don't understand why. To add some background in case I'm simply missing a required step, here's my Activity declaration in the manifest and the code I'm using to launch the activity. The Activity itself doesn't do anything worth mentioning in regards to this: in AndroidManifest.xml: <activity android:name=".ArtistActivity" android:label="Artist" android:launchMode="singleTop"> </activity> in my calling Activity: Intent i = new Intent(); i.putExtra(EXTRA_KEY_ARTIST, id); i.setClass(this, ArtistActivity.class); i.addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP); startActivity(i);

    Read the article

  • The null value cannot be assigned to a member with type System.Int64 which is a non-nullable value type

    - by BritishDeveloper
    I'm getting the following error in my MVC2 app using LINQ to SQL (I am new to both). I am connected to an actual SQL Server, not a local .mdf file: System.InvalidOperationException The null value cannot be assigned to a member with type System.Int64 which is a non-nullable value type My SQL table has a column called MessageID. It is of type BigInt and has a primary key, NOT NULL and an IDENTITY 1 1, no Default. In my dbml designer it has the following declaration for this field: [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_MessageId", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)] public long MessageId { get { return this._MessageId; } set { if ((this._MessageId != value)) { this.OnMessageIdChanging(value); this.SendPropertyChanging(); this._MessageId = value; this.SendPropertyChanged("MessageId"); this.OnMessageIdChanged(); } } } It keeps telling me that null cannot be assigned - I'm not passing through null! It's a long - it can't even be null! Am I doing something stupid? I can't find a solution anywhere! I made this work by changing the type of this property to Nullable<long>, but surely this can't be right? Update: I am using InsertOnSubmit. Simplified code: public ActionResult Create(Message message) { if (ModelState.IsValid) { var db = new MessagingDataContext(); db.Messages.InsertOnSubmit(message); db.SubmitChanges(); //line 93 (where it breaks) } } breaks on SubmitChanges() with the error at the top of this question. Update 2: Stack trace: at Read_Object(ObjectMaterializer`1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source) at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item) at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item) at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at Qanda.Controllers.MessagingController.Ask(Message message) in C:\Qanda\Qanda\Controllers\MessagingController.cs:line 93 Update 3: No one knows, and I don't have enough clout to offer a bounty! So I continued on my ASP.NET blog. Please help!
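    One cheap diagnostic worth trying before settling on Nullable<long> (a sketch, not a confirmed fix): LINQ to SQL can log every statement it generates, including the read-back that materializes MessageId after the INSERT (that read-back is what AutoSync=AutoSync.OnInsert triggers); if that value comes back NULL on this server, it would produce exactly this exception:

        // Minimal logging sketch: DataContext.Log echoes every generated statement.
        var db = new MessagingDataContext();
        db.Log = Console.Out;   // or a StringWriter, to inspect the text elsewhere

        db.Messages.InsertOnSubmit(message);
        db.SubmitChanges();     // check whether the identity read-back after
                                // the INSERT actually returns a value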

    Read the article

  • How to make Delphi Prism indexed properties visible to C# when properties are not default

    - by Arcturus
    I have several Delphi Prism classes with indexed properties that I use a lot on my C# web applications (we are migrating a big Delphi Win32 system to ASP.Net). My problem is that it seems that C# can't see the indexed properties if they aren't the default properties of their classes. Maybe I'm doing something wrong, but I'm completely lost. I know that this question looks a lot like a bug report, but I need to know if someone else knows how to solve this before I report a bug. If I have a class like this: TMyClass = public class private ... method get_IndexedBool(index: Integer): boolean; method set_IndexedBool(index: Integer; value: boolean); public property IndexedBool[index: Integer]: boolean read get_IndexedBool write set_IndexedBool; default; // make IndexedBool the default property end; I can use this class in C# like this: var myObj = new TMyClass(); myObj[0] = true; However, if TMyClass is defined like this: TMyClass = public class private ... method get_IndexedBool(index: Integer): boolean; method set_IndexedBool(index: Integer; value: boolean); public property IndexedBool[index: Integer]: boolean read get_IndexedBool write set_IndexedBool; // IndexedBool is not the default property anymore end; Then the IndexedBool property becomes invisible in C#. The only way I can use it is doing this: var myObj = new TMyClass(); myObj.set_IndexedBool(0, true); I don't know if I'm missing something, but I can't see the IndexedBool property if I remove the default in the property declaration. Besides that, I'm pretty sure that it is wrong to have direct access to a private method of a class instance. Any ideas?
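    From the C# consumer's point of view, named (non-default) indexed properties are simply not consumable with index syntax outside COM interop, which is consistent with what you are seeing; the get_/set_ accessor methods are all the language exposes. A common workaround, sketched below with hypothetical names (and assuming the accessor methods remain callable from C#, as in your set_IndexedBool example), is a small C#-side wrapper that turns the accessor pair back into an indexer:

        // Hypothetical wrapper: restores index syntax over get_IndexedBool/set_IndexedBool
        // without requiring IndexedBool to be the default property.
        public sealed class IndexedBoolAccessor
        {
            private readonly TMyClass owner;

            public IndexedBoolAccessor(TMyClass owner)
            {
                this.owner = owner;
            }

            public bool this[int index]
            {
                get { return owner.get_IndexedBool(index); }
                set { owner.set_IndexedBool(index, value); }
            }
        }

        // Usage: var bools = new IndexedBoolAccessor(myObj); bools[0] = true;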

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >