Search Results

Search found 3230 results on 130 pages for 'david yu'.


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

    - Putting your database into a source control system, and,
    - Running a continuous integration process, each time changes are checked in.

    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    - Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

    - Refactoring Databases – Evolutionary Database Design, by Scott W. Ambler and Pramod J. Sadalage
    - Versioning Databases – Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found on http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    1. Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
    2. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
    3. Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue.
    4. Though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link.
    5. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won't be using them in this particular simple example.
    6. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database.

    We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But, before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions (and a quick check of the RedGateAppCI database shows that the changes have been made).

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data to my solitary table – this can be done a number of ways, but I'll do this in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row. Then click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the data-linking option from the context menu. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close".

    We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select "Commit Changes to Source Control…"): the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I'm going to use here is a very simple one – are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in.

    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        Declare @Output VarChar(max)
        Set @Output = ''

        SELECT @Output = @Output + Email + Char(13) + Char(10)
        FROM dbo.Email
        WHERE email NOT LIKE '%_@__%.__%'

        If @Output > ''
        Begin
            Set @Output = Char(13) + Char(10) + @Output
            EXEC tSQLt.Fail @Output
        End

    END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before. Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database shows that the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows – sadly – a successful build!

    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number) we get – superb – a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

    21-Jun-2013 11:35:19    Build FAILED.
    21-Jun-2013 11:35:19
    21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
    21-Jun-2013 11:35:19    (sqlCI target) ->
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build – it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy these to target sites?

    Read the article

  • Dynamic XAP loading in Task-It - Part 1

    Download Source Code

    NOTE 1: The source code provided is running against the RC versions of Silverlight 4 and Visual Studio 2010, so you will need to update to those bits to run it.
    NOTE 2: After downloading the source, be sure to set the .Web project as the StartUp Project, and Default.aspx as the Start Page.

    In my MEF intro post, MEF to the rescue in Task-It, I outlined a couple of issues I was facing and explained why I chose MEF (the Managed Extensibility Framework) to solve these issues.

    Other posts to check out

    There are a few other resources out there around dynamic XAP loading that you may want to review (by the way, Glenn Block is the main dude when it comes to MEF):

    - Glenn Block's 3-part series on a dynamically loaded dashboard
    - Glenn and John Papa's Silverlight TV video on dynamic XAP loading

    These provide some great info, but didn't exactly cover the scenario I wanted to achieve in Task-It – and that is dynamically loading each of the app's pages the first time the user enters a page.

    The code

    In the code I provided for download above, I created a simple solution that shows the technique I used for dynamic XAP loading in Task-It, but without all of the other code that surrounds it. Taking all that other stuff away should make it easier to grasp. Having said that, there is still a fair amount of code involved. I am always looking for ways to make things simpler, and to achieve the desired result with as little code as possible, so if I find a better/simpler way I will blog about it, but for now this technique works for me.

    When I created this solution I started by creating a new Silverlight Navigation Application called DynamicXAPLoading. I then added the following line to my UriMappings in MainPage.xaml:

    <uriMapper:UriMapping Uri="/{assemblyName};component/{path}" MappedUri="/{assemblyName};component/{path}"/>

    In the section of MainPage.xaml that produces the page links in the upper right, I kept the Home link, but added a couple of new ones (page 1 and page 2). These are the pages that will be dynamically (lazy) loaded:

    <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">
        <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>
        <Rectangle Style="{StaticResource DividerStyle}"/>
        <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>
        <Rectangle Style="{StaticResource DividerStyle}"/>
        <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>
    </StackPanel>

    In App.xaml.cs I added a bit of MEF code. In Application_Startup I call a method called InitializeContainer, which creates a PackageCatalog (a MEF thing), then I create a CompositionContainer and pass it to the CompositionHost.Initialize method. This is boiler-plate MEF stuff that allows you to do 'composition' and import 'packages'. You're welcome to do a bit more MEF research on what is happening here if you'd like, but for the purpose of this example you can just trust that it works. :-)
    private void Application_Startup(object sender, StartupEventArgs e)
    {
        InitializeContainer();
        this.RootVisual = new MainPage();
    }

    private static void InitializeContainer()
    {
        var catalog = new PackageCatalog();
        catalog.AddPackage(Package.Current);
        var container = new CompositionContainer(catalog);
        container.ComposeExportedValue(catalog);
        CompositionHost.Initialize(container);
    }

    Infrastructure

    In the sample code you'll notice that there is a project in the solution called DynamicXAPLoading.Infrastructure. This is simply a Silverlight Class Library project that I created just to move stuff I considered application 'infrastructure' code into a separate place, rather than cluttering the main Silverlight project (DynamicXAPLoading). I did this same thing in Task-It, as the amount of this type of code was starting to clutter up the Silverlight project, and it just seemed to make sense to move things like Enums, Constants and the like off to a separate place. In the DynamicXAPLoading.Infrastructure project you'll see 3 classes:

    - Enums – There is only one enum in here called ModuleEnum. We'll use these later.
    - PageMetadata – We will use this class later to add metadata to a new dynamically loaded project.
    - ViewModelBase – This is simply a base class for view models that we will use in this, as well as future samples. As mentioned in my MVVM post, I will be using the MVVM pattern throughout my code for reasons detailed in that post.

    By the way, the ViewModelExtension class in there allows me to do strongly-typed property changed notification, so rather than OnPropertyChanged("MyProperty"), I can do this.OnPropertyChanged(p => p.MyProperty). It's just a less error-prone approach, because if you don't spell "MyProperty" correctly using the first method, nothing will break, it just won't work.

    Adding a new page

    We currently have a couple of pages that are being dynamically (lazy) loaded, but now let's add a third page.

    1. First, create a new Silverlight Application project. In this example I call it Page3. In the future you may prefer to use a different name, like DynamicXAPLoading.Page3, or even DynamicXAPLoading.Modules.Page3. It can be whatever you want. In my Task-It application I used the latter approach (with 'Modules' in the name). I do think of these applications as 'modules', but Prism uses the same term, so some folks may not like that. Use whichever naming convention you feel is appropriate, but for now Page3 will do. When you change the name to Page3 and click OK, you will be presented with the Add New Project dialog. It is important that you leave the 'Host the Silverlight application in a new or existing Web site in the solution' option checked, with the .Web project selected in the dropdown below. This will create the .xap file for this project under ClientBin in the .Web project, which is where we want it.

    2. Uncheck the 'Add a test page that references the application' checkbox, and leave everything else as is.

    3. Once the project is created, you can delete App.xaml and MainPage.xaml.

    4. You will need to add references in your new project to the following:
       - DynamicXAPLoading.Infrastructure.dll (this is a Project reference)
       - DynamicNavigation.dll (this is in the Libs directory under the DynamicXAPLoading project)
       - System.ComponentModel.Composition.dll
       - System.ComponentModel.Composition.Initialization.dll
       - System.Windows.Controls.Navigation.dll

    If you have installed the latest RC bits you will find the last 3 dlls under the .NET tab in the Add Reference dialog.
    They live in the following location, or if you are on a 64-bit machine like me, it will be Program Files (x86):

    C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client

    Now let's create some UI for our new project.

    5. First, create a new Silverlight User Control called Page3.dyn.xaml.

    6. Paste the following code into the xaml:

    <dyn:DynamicPageShim xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"
        xmlns:my="clr-namespace:Page3;assembly=Page3">
        <my:Page3Host />
    </dyn:DynamicPageShim>

    This is just a 'shim', part of David Poll's technique for dynamic loading.

    7. Expand the icon next to Page3.dyn.xaml and delete the code-behind file (Page3.dyn.xaml.cs).

    8. Next we will create a control that will 'host' our page. Create another Silverlight User Control called Page3Host.xaml and paste in the following XAML:

    <dyn:DynamicPage x:Class="Page3.Page3Host"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"
        xmlns:Views="clr-namespace:Page3.Views"
        mc:Ignorable="d"
        d:DesignHeight="300" d:DesignWidth="400"
        Title="Page 3">

        <Views:Page3/>

    </dyn:DynamicPage>

    9. Now paste the following code into the code-behind for this control:

    using DynamicXAPLoading.Infrastructure;

    namespace Page3
    {
        [PageMetadata(NavigateUri = "/Page3;component/Page3.dyn.xaml", Module = ModuleEnum.Page3)]
        public partial class Page3Host
        {
            public Page3Host()
            {
                InitializeComponent();
            }
        }
    }

    Notice that we are now using that PageMetadata custom attribute class that we created in the Infrastructure project, and setting its two properties:

    - NavigateUri – This tells it that the assembly is called Page3 (with a slash beforehand), and the page we want to load is Page3.dyn.xaml… our 'shim'. That line we added to the UriMapper in MainPage.xaml will use this information to load the page.
    - Module – This goes back to that ModuleEnum class in our Infrastructure project. However, setting the Module to ModuleEnum.Page3 will cause a compilation error, so…

    10. Go back to that Enums.cs under the Infrastructure project and add a 3rd entry for Page3:

    public enum ModuleEnum
    {
        Page1,
        Page2,
        Page3
    }

    11. Now right-click on the Page3 project and add a folder called Views.

    12. Right-click on the Views folder and create a new Silverlight User Control called Page3.xaml. We won't bother creating a view model for this User Control as I did in the Page 1 and Page 2 projects, just for the sake of simplicity. Feel free to add one if you'd like though, and copy the code from one of those other projects. Right now those view models aren't really doing anything anyway… though they will in my next post. :-)
    13. Now let's replace the xaml for Page3.xaml with the following:

    <dyn:DynamicPage x:Class="Page3.Views.Page3"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"
        mc:Ignorable="d"
        d:DesignHeight="300" d:DesignWidth="400"
        Style="{StaticResource PageStyle}">

        <Grid x:Name="LayoutRoot">
            <ScrollViewer x:Name="PageScrollViewer" Style="{StaticResource PageScrollViewerStyle}">
                <StackPanel x:Name="ContentStackPanel">
                    <TextBlock x:Name="HeaderText" Style="{StaticResource HeaderTextStyle}" Text="Page 3"/>
                    <TextBlock x:Name="ContentText" Style="{StaticResource ContentTextStyle}" Text="Page 3 content"/>
                </StackPanel>
            </ScrollViewer>
        </Grid>

    </dyn:DynamicPage>

    14. And in the code-behind remove the inheritance from UserControl, so it should look like this:

    namespace Page3.Views
    {
        public partial class Page3
        {
            public Page3()
            {
                InitializeComponent();
            }
        }
    }

    One thing you may have noticed is that the base class for the last two User Controls we created is DynamicPage. Once again, we are using the infrastructure that David Poll created.

    15. OK, a few last things. We need a link on our main page so that we can access our new page. In MainPage.xaml let's update our links to look like this:

    <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">
        <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>
        <Rectangle Style="{StaticResource DividerStyle}"/>
        <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>
        <Rectangle Style="{StaticResource DividerStyle}"/>
        <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>
        <Rectangle Style="{StaticResource DividerStyle}"/>
        <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 3" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage3}"/>
    </StackPanel>

    16. Next, we need to add the following at the bottom of MainPageViewModel in the ViewModels directory of our DynamicXAPLoading project:

    public ModuleEnum ModulePage3
    {
        get { return ModuleEnum.Page3; }
    }

    17. And at last, we need to add a case for our new page to the switch statement in MainPageViewModel:

    switch (module)
    {
        case ModuleEnum.Page1:
            DownloadPackage("Page1.xap");
            break;
        case ModuleEnum.Page2:
            DownloadPackage("Page2.xap");
            break;
        case ModuleEnum.Page3:
            DownloadPackage("Page3.xap");
            break;
        default:
            break;
    }

    Now fire up the application and click the page 1, page 2 and page 3 links. What you'll notice is that there is a 2-second delay the first time you hit each page. That is because I added the following line to the Navigate method in MainPageViewModel:

    Thread.Sleep(2000); // Simulate a 2 second initial loading delay

    The reason I put this in there is that I wanted to simulate a delay the first time the page loads (as the .xap is being downloaded from the server).
    You'll notice that after the first hit to the page, though, there is no delay… that's because the .xap has already been downloaded. Feel free to comment out this 2-second delay, or remove it if you'd like. I just wanted to show how subsequent hits to the page would be quicker than the initial one. By the way, you may want to display some sort of BusyIndicator while the .xap is loading. I have that in my Task-It application, but for the sake of simplicity I did not include it here. In the future I'll blog about how I show and hide the BusyIndicator using events (I'm currently using the eventing framework in Prism for that, but may move to the one in the MVVM Light Toolkit some time soon). Whew, that felt like a lot of steps, but it does work quite nicely. As I mentioned earlier, I'll try to find ways to simplify the code (I'd like to get away from having things like hard-coded .xap file names) and will blog about it in the future if I find a better way. In my next post, I'll talk more about what is actually happening with the code that makes this all work.
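    One piece the walkthrough never shows is the body of DownloadPackage itself. As a rough sketch only – assuming the Package/PackageCatalog types and the Package.DownloadPackageAsync signature from the Silverlight 4 MEF preview bits, and assuming the PackageCatalog exported in InitializeContainer is imported into the view model (member names other than DownloadPackage are illustrative, not taken from the sample) – it might look something like this:

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Packaging;

    // Inside MainPageViewModel (sketch only):

    // Assumption: InitializeContainer exported the catalog via
    // container.ComposeExportedValue(catalog), so MEF can satisfy this import.
    [Import]
    public PackageCatalog Catalog { get; set; }

    private void DownloadPackage(string xapName)
    {
        // Download the .xap and add it to the catalog; MEF recomposition then
        // makes the new page's exports (including its PageMetadata) available.
        Package.DownloadPackageAsync(
            new Uri(xapName, UriKind.Relative),
            (args, package) =>
            {
                if (args.Error == null && !args.Cancelled)
                    Catalog.AddPackage(package);
            });
    }

    If the bits you have differ, the key idea is unchanged: download the package asynchronously, add it to the same PackageCatalog the container was created with, and let recomposition do the rest.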

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    "Is Code Review Important and Effective?"

    There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." – Wikipedia

    Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code Review before Check in

    The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. (Image 1 – code review before check in)

    Pros
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

    Cons
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it's hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code Review after Check in

    The developer checks in; random code reviews are performed on the checked-in code. (Image 2 – Code review after check in)

    Pros
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server.
    - Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check in; code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid Approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient. (Image 3 – Hybrid Approach)

    1. Code review high impact check ins.
    It is not possible to review everything; by setting up code review check in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame.

    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say?

    So I asked a few experts what they thought of a code review quality gate before checking in code.

    Terje Sandstrom | Microsoft ALM MVP

    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkins. Having a requirement that code is checked in in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time – would be great to be on CIs – but…, so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger

    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).

    - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
    - The "post-checkin" may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play….

    Robert MacLean | Microsoft ALM MVP

    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in, while in high risk areas you need to do it before check in. An example: those new to a team, or juniors, need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger

    Depends on a per scenario basis. We recommend post check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post check-in code can be promoted to Accepted branches. Or can be rejected.

    Pre check-in reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.

    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger

    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out.

    1.) If you do reviews per check in, this is not very practical as a hard rule, because this will disturb the flow of the team very often, or it will lead to a reduced checkin frequency of the devs, which I would not accept.

    2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first, but then you might also review changes of other PBIs.

    Jakob Leander | Sr. Director, Avanade

    In my experience, manual code review:
    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project)
    2. When a project actually does it, they often do not do it right away = errors pile up
    3. Requires a lot of time discussing/defining the standard, and for the team to learn it

    However, code review is very important, since e.g. even small memory leaks in a high volume web solution have big consequences. In the last years I have advocated the following approach for code review:

    - Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
    - The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.

    Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
    This has HUUGE benefits:
    - You can easily tweak it to reach the level you desire together with the customer
    - It is easy to measure for both developers/management
    - It is 100% consistent across the code base
    - It gets validated all the time, so you never end up getting hammered by a customer review in the end
    - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication

    You need to track this at least during nightly builds and make sure the team sees the total # of issues. Do not allow the # of issues to grow uncontrolled. On the projects I run, I require code analysis to have run on code before checkin (checkin rule). This means:
    - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app

    Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade

    I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase… Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less, by containment and imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP

    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade

    Peer review is the only way to scale, and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade

    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger

    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html

    I'd like to add a few more points:

    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-checkin policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-checkin reviews. It might sound strange, if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn't make it into the important branch.

    So my philosophy:

    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click suppress-all to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-checkin/post-checkin doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

    ---

    I would love to hear what you think!

    Read the article

  • how to bind the image dynamically for datagrid in .cs

    - by prince23
    Hi, this is my XAML code:

    <sdk:DataGrid x:Name="dgMarks" CanUserResizeColumns="False" SelectionMode="Single" AutoGenerateColumns="False" VerticalAlignment="Top" IsReadOnly="True" Margin="13,44,0,0" RowDetailsVisibilityChanged="dgMarks_RowDetailsVisibilityChanged" RowDetailsVisibilityMode="Collapsed" Height="391" HorizontalAlignment="Left" Width="965" SelectionChanged="dgMarks_SelectionChanged" VerticalScrollBarVisibility="Visible">
        <sdk:DataGrid.Columns>
            <sdk:DataGridTemplateColumn>
                <sdk:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <Button x:Name="myButton" Click="ExpandMarks_Click">
                            <TextBlock Text="{Binding Level}" TextWrapping="NoWrap"></TextBlock>
                            <Image x:Name="imgMarks" Stretch="None"/>
                        </Button>
                    </DataTemplate>
                </sdk:DataGridTemplateColumn.CellTemplate>
            </sdk:DataGridTemplateColumn>
            <sdk:DataGridTemplateColumn Header="Name" Visibility="Collapsed">
                <sdk:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <sdk:Label Content="{Binding Name}"/>
                    </DataTemplate>
                </sdk:DataGridTemplateColumn.CellTemplate>
            </sdk:DataGridTemplateColumn>
            <sdk:DataGridTemplateColumn Header="Marks" Width="80">
                <sdk:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <sdk:Label Content="{Binding Marks}"/>
                    </DataTemplate>
                </sdk:DataGridTemplateColumn.CellTemplate>
            </sdk:DataGridTemplateColumn>
        </sdk:DataGrid.Columns>
    </sdk:DataGrid>

    From the database I am getting these values:

    name  marks  Level
    abc   23     0
    xyz   67     1
    yu    56     0
    aa    89     1

    Here I am binding these values to the DataGrid. I have a tricky thing to be done, based on the Level: if the Level value is 1, then bind the image; if the Level value is 0, then do not bind the image for that row. I know this is how we need to handle it, but where should I write this code – in which event?

    Image imgLevel = (Image)templateTrendScore.FindName("imgMarks");
    if (level1 == 1)
    {
        imgLevel.Source = new BitmapImage(new Uri("/Images/image1.JPG", UriKind.Relative));
    }

    Any help would be great. Thanks in advance.
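    One way to sidestep the event question entirely is a value converter on the Image's Source binding, so each row decides for itself whether to show the image. This is only a sketch, not from the original question – the converter class name and resource key are illustrative; the image path is the one from the snippet above:

    using System;
    using System.Globalization;
    using System.Windows.Data;
    using System.Windows.Media.Imaging;

    // Maps Level == 1 to an image source, and anything else to no image.
    public class LevelToImageConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            int level;
            if (value != null && int.TryParse(value.ToString(), out level) && level == 1)
                return new BitmapImage(new Uri("/Images/image1.JPG", UriKind.Relative));
            return null; // Level 0 rows get no image
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }

    With the converter declared as a resource (say <local:LevelToImageConverter x:Key="LevelToImage"/>), the Image in the cell template becomes <Image x:Name="imgMarks" Stretch="None" Source="{Binding Level, Converter={StaticResource LevelToImage}}"/>, and no code-behind event is needed at all.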

    Read the article

  • Make exact mp4 (H264) format for uploading to youtube

    - by WHITECOLOR
    With ffmpeg I'm converting video from an mp3 and a picture to upload it to youtube. After upload, the conversion fails; the reasons are unknown. I believe the problem is in the format. By the way, if I'm uploading a file 5 minutes in length, it fails; if I upload 30 seconds of this file, it succeeds. I have downloaded an mp4 file from youtube and then uploaded it again – it is done very fast. So a nice solution would be to convert videos to the same format that is produced by google. I got the following output from ffmpeg:

    ffmpeg version N-44264-g070b0e1 Copyright (c) 2000-2012 the FFmpeg developers
      built on Sep 7 2012 17:38:57 with gcc 4.7.1 (GCC)
      configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
      libavutil      51. 72.100 / 51. 72.100
      libavcodec     54. 55.100 / 54. 55.100
      libavformat    54. 25.105 / 54. 25.105
      libavdevice    54.  2.100 / 54.  2.100
      libavfilter     3. 16.100 /  3. 16.100
      libswscale      2.  1.101 /  2.  1.101
      libswresample   0. 15.100 /  0. 15.100
      libpostproc    52.  0.100 / 52.  0.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'youtubetrack0.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2012-10-02 22:58:57
      Duration: 00:06:46.66, start: 0.000000, bitrate: 176 kb/s
        Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 450x360, 78 kb/s, 6 fps, 6 tbr, 12 tbn, 12 tbc
        Metadata:
          creation_time   : 1970-01-01 00:00:00
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 95 kb/s
        Metadata:
          creation_time   : 2012-10-02 22:58:57
          handler_name    : IsoMedia File Produced by Google, 5-11-2011

    Is it possible to construct ffmpeg parameters that would give the same format that google internally does? Is the information above sufficient? I couldn't construct the needed params. For example, I don't understand how to set tbn and what 95 kb/s means in "Stream #0:1(und): Audio:". Now I just do:

    ffmpeg -i videoimage.jpg -i audio.mp3 video.mp4

    Info I've got:

    ffmpeg version N-44998-gdf82454 Copyright (c) 2000-2012 the FFmpeg developers
      built on Oct 2 2012 23:03:12 with gcc 4.7.1 (GCC)
      configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
      libavutil      51. 73.101 / 51. 73.101
      libavcodec     54. 63.100 / 54. 63.100
      libavformat    54. 29.105 / 54. 29.105
      libavdevice    54.  3.100 / 54.  3.100
      libavfilter     3. 19.102 /  3. 19.102
      libswscale      2.  1.101 /  2.  1.101
      libswresample   0. 16.100 /  0. 16.100
      libpostproc    52.  1.100 / 52.  1.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf54.25.105
      Duration: 00:06:46.81, start: 0.000000, bitrate: 129 kb/s
        Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 450x360, 3392 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 127 kb/s
        Metadata:
          handler_name    : SoundHandler

    This video fails the conversion on youtube. I also tried other vcodec params and extensions for the output file (mp4, wmv, avi), but those failed too. Would be grateful for help.

    Read the article

  • Syntax error on token "QUOTE", VariableDeclaratorId expected after this token

    - by user356812
    I've posted a bigger chunk of the code below. You can see that initially QUOTE was procedural- coded in place. I'm trying to learn how to use declarative design so I want to do the same thing but by using resources. It seems like I need to access the string.xml thru the @R.id tag and identify QUOTE with that string value. But I don't know enough to negotiate this. Any tips? Thanks! public class circle extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(new GraphicsView(this)); } static public class GraphicsView extends View { //private static final String QUOTE = "Happy Birthday to David."; private final String QUOTE = getString(R.string.quote); ..... @Override protected void onDraw(Canvas canvas) { // Drawing commands go here canvas.drawPath(circle, cPaint); canvas.drawTextOnPath(QUOTE, circle, 0, 20, tPaint);

    Read the article

  • C#: Emgu CV creating image problem

    - by Meko
    Hi all. When I am trying to create an image like:

    Image<Gray, Byte> testImage = new Image<Gray, Byte>("david.jpg");

    compiling gives "An unhandled exception of type 'System.ArgumentException' occurred in System.Drawing.dll". But if I use:

    DialogResult result = openFileDialog1.ShowDialog();
    if (result == DialogResult.OK || result == DialogResult.Yes)
    {
        textBox1.Text = openFileDialog1.FileName;
    }
    Image<Gray, Byte> testImage = new Image<Gray, Byte>(textBox1.Text);

    it works. The problem is that it can't find the path? I am adding all the .jpg files to the project folder.
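    If the relative path is the culprit, one possible fix (a sketch only – it assumes "david.jpg" is set to copy to the build output folder) is to resolve the file against the executable's directory instead of relying on the current working directory:

    using System;
    using System.IO;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Build an absolute path from the application's base directory, so the image
    // is found no matter what the current working directory happens to be.
    string imagePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "david.jpg");
    Image<Gray, Byte> testImage = new Image<Gray, Byte>(imagePath);

    Checking File.Exists(imagePath) before the constructor call would also turn the unhelpful ArgumentException into a clearer diagnosis.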

    Read the article

  • paypal API in VB.net

    - by StealthRT
    Hey all, I have converted some C# PayPal API code over to VB.net. I have added that code to a class within my project, but I can not seem to access it:

    Imports System
    Imports com.paypal.sdk.services
    Imports com.paypal.sdk.profiles
    Imports com.paypal.sdk.util

    Namespace GenerateCodeNVP
        Public Class GetTransactionDetails
            Public Sub New()
            End Sub

            Public Function GetTransactionDetailsCode(ByVal transactionID As String) As String
                Dim caller As New NVPCallerServices()
                Dim profile As IAPIProfile = ProfileFactory.createSignatureAPIProfile()
                profile.APIUsername = "xxx"
                profile.APIPassword = "xxx"
                profile.APISignature = "xxx"
                profile.Environment = "sandbox"
                caller.APIProfile = profile

                Dim encoder As New NVPCodec()
                encoder("VERSION") = "51.0"
                encoder("METHOD") = "GetTransactionDetails"
                encoder("TRANSACTIONID") = transactionID

                Dim pStrrequestforNvp As String = encoder.Encode()
                Dim pStresponsenvp As String = caller.[Call](pStrrequestforNvp)
                Dim decoder As New NVPCodec()
                decoder.Decode(pStresponsenvp)
                Return decoder("ACK")
            End Function
        End Class
    End Namespace

    I am using this to access that class:

    Private Sub cmdGetTransDetail_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdGetTransDetail.Click
        Dim thereturn As String
        thereturn = GetTransactionDetailsCode("test51322")
    End Sub

    But it keeps telling me: Error 2 Name 'GetTransactionDetailsCode' is not declared. I'm new at calling classes in VB.net, so any help would be great! :o)

    David

    Read the article

  • Moving from WCF RIA Beta to RC: best practices?

    - by Duncan Bayne
    I have an existing WCF RIA project built on the Release Candidate; I'm now moving to the Release version and have discovered many changes. David Scruggs made the following comment on his (MSDN) blog: "If you've written anything in Silverlight 4 RIA Services, you'll need to rewrite it. There has been a lot of refactoring and namespace moves." Having made a brief attempt to compile the old solution with the new RIA framework, I'm inclined to agree. My current plan is to:

    - remove the Silverlight Business Application projects from the Solution
    - rebuild the EF4 items from the database
    - create a new Silverlight Business Application project
    - re-add the files (XAML, CS) from the old Silverlight Business Application project

    Does this sound like a reasonable approach? I think it's cleaner than trying to manually alter the existing project.

    Read the article

  • Best non-development book for software developers

    - by Dima Malenko
    What is the best non-software-development-related book that you think each software developer should read? Note, there is a similar, poll-style question here: What non-programming books should programmers read?

    Update: Peopleware is a great book, must read, no doubt. But it is about software development, so it does not count.

    Update: We ended up suggesting more than one book, and that's great! Below is a summary (with links to Amazon) of the books you should consider for your reading list:

    - The Design of Everyday Things by Donald Norman
    - Getting Things Done by David Allen
    - Gödel, Escher, Bach by Douglas R. Hofstadter
    - The Goal and It's Not Luck by Eliyahu M. Goldratt
    - Here Comes Everybody by Clay Shirky

    …to be continued.

    Read the article

  • How do I stub a view in rspec-2

    - by Trey Bean
    I'm in the process of upgrading an app to Rails 3/RSpec 2. I see that stubbing a view helper method has changed in RSpec 2. It looks like instead of doing template.stub!, we're now supposed to do view.stub!, but I can't seem to get this to work on beta 10. I get an "undefined local variable or method `view' for #<RSpec::Core::ExampleGroup::Nested_1::Nested_1::Nested_1:0x106785fd0>" error. I see that in this commit David removed the view method, but I can't figure out what it was replaced with. Something in ActionView::TestCase::Behavior? I'm on Rails 3.0.0.beta3. Any idea what I'm missing?

    Read the article

  • Host new WF4 workflows in AppFabric

    - by racingcow
    Hello, I am new to using AppFabric to host WF services. I am trying to write a workflow admin application that will allow users to create XAML workflow definitions using the hosted WF4 designer, and then somehow allow those workflow definitions to be automatically deployed and hosted in AppFabric at the click of a button. I have the designer working, and I have read a couple of tutorials on how to host workflow services in AppFabric, such as http://msdn.microsoft.com/en-us/library/ee677238.aspx, but my problem is how to deploy and host the workflow services via code. Does anyone know if this sort of "autodeploy/host" thing can be done with AppFabric? If so, could you point me in the right direction? -David

    Read the article

  • Hibernate unknown entity (not missing @Entity or import javax.persistence.Entity)

    - by david99world
    I've got a really simple class...

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.Table;

        @Entity
        @Table(name = "users")
        public class User {

            @Column(name = "firstName")
            private String firstName;

            @Column(name = "lastName")
            private String lastName;

            @Column(name = "email")
            private String email;

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id")
            private long id;

            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }
            public long getId() { return id; }
            public void setId(long id) { this.id = id; }
        }

    I call it using...

        public class Main {
            /**
             * @param args
             */
            public static void main(String[] args) {
                HibernateUtil.buildSessionFactory();
                Session session = HibernateUtil.getSessionFactory().getCurrentSession();
                session.beginTransaction();
                User u = new User();
                u.setEmail("[email protected]");
                u.setFirstName("David");
                u.setLastName("Gray");
                session.save(u);
                session.getTransaction().commit();
                System.out.println("Record committed");
                session.close();
            }
        }

    I keep getting...

        Exception in thread "main" org.hibernate.MappingException: Unknown entity: org.assessme.com.entity.User
            at org.hibernate.internal.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:1172)
            at org.hibernate.internal.SessionImpl.getEntityPersister(SessionImpl.java:1316)
            at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:117)
            at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:204)
            at org.hibernate.event.internal.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:55)
            at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:189)
            at org.hibernate.event.internal.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:49)
            at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:90)
            at org.hibernate.internal.SessionImpl.fireSave(SessionImpl.java:670)
            at org.hibernate.internal.SessionImpl.save(SessionImpl.java:662)
            at org.hibernate.internal.SessionImpl.save(SessionImpl.java:658)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
            at $Proxy4.save(Unknown Source)
            at Main.main(Main.java:20)

    HibernateUtil is...

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;
        import org.hibernate.service.ServiceRegistry;
        import org.hibernate.service.ServiceRegistryBuilder;

        public class HibernateUtil {

            private static SessionFactory sessionFactory;
            private static ServiceRegistry serviceRegistry;

            public static SessionFactory buildSessionFactory() {
                try {
                    // Create the SessionFactory from hibernate.cfg.xml
                    Configuration configuration = new Configuration();
                    configuration.configure();
                    serviceRegistry = new ServiceRegistryBuilder().applySettings(configuration.getProperties()).buildServiceRegistry();
                    return new Configuration().configure().buildSessionFactory(serviceRegistry);
                } catch (Throwable ex) {
                    // Make sure you log the exception, as it might be swallowed
                    System.err.println("Initial SessionFactory creation failed." + ex);
                    throw new ExceptionInInitializerError(ex);
                }
            }

            public static SessionFactory getSessionFactory() {
                sessionFactory = new Configuration().configure().buildSessionFactory(serviceRegistry);
                return sessionFactory;
            }
        }

    Does anyone have any ideas? I've looked at so many duplicates, but the resolutions don't appear to work for me. hibernate.cfg.xml is shown below...

        <?xml version='1.0' encoding='utf-8'?>
        <!DOCTYPE hibernate-configuration PUBLIC
            "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
        <hibernate-configuration>
            <session-factory>
                <!-- Database connection settings -->
                <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
                <property name="connection.url">jdbc:mysql://localhost/ssme</property>
                <property name="connection.username">root</property>
                <property name="connection.password">mypassword</property>
                <!-- JDBC connection pool (use the built-in) -->
                <property name="connection.pool_size">1</property>
                <!-- SQL dialect -->
                <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
                <!-- Enable Hibernate's automatic session context management -->
                <property name="current_session_context_class">thread</property>
                <!-- Disable the second-level cache -->
                <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
                <!-- Echo all executed SQL to stdout -->
                <property name="show_sql">true</property>
                <!-- Drop and re-create the database schema on startup -->
                <property name="hbm2ddl.auto">update</property>
            </session-factory>
        </hibernate-configuration>
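
    A hedged guess at the likely cause: the hibernate.cfg.xml above never registers the User class, so the Configuration builds a SessionFactory that has no mapping for it. One minimal fix is to declare the annotated class in the config (class name taken from the stack trace):

        <!-- inside <session-factory>, after the properties -->
        <mapping class="org.assessme.com.entity.User"/>

    Equivalently, calling configuration.addAnnotatedClass(User.class) before building the SessionFactory should have the same effect. It may also be worth noting that getSessionFactory() above rebuilds the factory from a fresh Configuration on every call, so any programmatic registration would need to happen on that path too.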

    Read the article

  • MySQL date range problem

    - by StealthRT
    Hey all, I am in need of a little help figuring out how to get a range of days for my select query. Here is the code I am trying:

        select id, idNumber, theDateStart, theDateEnd
        from clients
        WHERE idNumber = '010203'
        AND theDateStart >= '2010-04-09'
        AND theDateEnd <= '2010-04-09';

    This is what the data in the table looks like:

        TheDateStart = 2010-04-09
        TheDateEnd = 2010-04-11

    When testing the code above, it does not return anything. If I take out the theDateEnd condition, it returns rows, but with some other records' data as well, which it should not do (it should only get one record). I know the problem is within the two dates. I'm not sure how to go about getting a date range for theDateStart and theDateEnd, since if someone tries it on, say, 2010-04-10, it still needs to know it's within the range 2010-04-09 to 2010-04-11. But right now, it does not... Any help would be great! :o) David
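
    A hedged reading of the symptom: the test is written as if the day must contain the range, but a containment check needs the comparisons the other way around; the stored start must be on or before the chosen day, and the stored end on or after it. A C# LINQ sketch of that predicate shape (the Client type and data are hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        var clients = new List<Client>
        {
            new Client("010203", new DateTime(2010, 4, 9), new DateTime(2010, 4, 11)),
        };

        var day = new DateTime(2010, 4, 10);
        // A day falls inside [start, end] when: start <= day AND end >= day.
        var matches = clients.Where(c => c.IdNumber == "010203"
                                      && c.TheDateStart <= day
                                      && c.TheDateEnd >= day);
        Console.WriteLine(matches.Count());   // 1

        record Client(string IdNumber, DateTime TheDateStart, DateTime TheDateEnd);

    Carried back to the query, that would mean comparing theDateStart <= the chosen day AND theDateEnd >= the chosen day.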

    Read the article

  • JavaScript date and loop check

    - by StealthRT
    Hey all, this is the code I have to check for a day equal to one of the days in the comma-separated list:

        for (var i = 0; i < daysHidden.length; i++) {
            if (daysHidden[i] == d.getDate());
            {
                alert(daysHidden[i] + '=' + d.getDate());
            }
        }

    daysHidden = 1 (it's the only thing in the list; April 1st is already gone and today is the 2nd, so 1 is the only one in the list) and d.getDate() has 1-30 (for April). When I run the code, however, it keeps looping through the if code when it should only run once (when it finds that 1=1). I keep getting alert boxes that say:

        1=1
        1=2
        1=3
        etc....
        1=30

    So what am I doing incorrectly? I already tried treating them as strings:

        if (daysHidden[i].ToString == d.getDate().ToString);

    But that doesn't seem to work.... Any help would be great :) David
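
    A hedged observation that may be the culprit: the semicolon immediately after the if closes the statement with an empty body, so the braced block below it is a free-standing block that runs on every iteration regardless of the comparison. The same trap exists in C#, where the compiler at least warns (CS0642); a minimal sketch:

        using System;

        int[] daysHidden = { 1 };
        for (int day = 1; day <= 30; day++)
        {
            if (daysHidden[0] == day) ;   // the stray ';' ends the 'if' with an empty body
            {
                // This block is not governed by the 'if'; it runs every iteration.
                Console.WriteLine(daysHidden[0] + "=" + day);
            }
        }

    Removing the stray semicolon makes the block belong to the if again.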

    Read the article

  • Level of SVG SMIL (animation) support among the browsers

    - by user246114
    Hi, does anyone know the current state of SVG SMIL animation support in the popular browsers? It looks like Safari, Chrome, and Opera support it. Firefox has confusing reports in their dev pages about SMIL support having been added, but I don't see it as of v3.6: https://bugzilla.mozilla.org/show_bug.cgi?id=216462. I am ignoring IE since it doesn't support SVG at all, and probably never will, much less SMIL. The other thing: just comparing this test page between Safari, Chrome, and Opera, http://srufaculty.sru.edu/david.dailey/svg/ovaling.svg, it looks like Opera is the only one that renders it correctly. Should we not be using SMIL? It kind of looks half-baked in all the browsers (sadly). Blast. Thanks

    Read the article

  • Ugly things and advantages of anonymous methods - C#

    - by nettguy
    I was asked to explain the ugly things and the advantages of anonymous methods. The possible ugly thing I explained: anonymous methods can quickly turn into spaghetti code. A possible advantage: we can produce thread-safe code using an anonymous method. Example:

        static List<string> Names = new List<string>(
            new string[] { "Jon Skeet", "Marc Gravell", "David", "Bill Gates" });

        static List<string> FindNamesStartingWith(string startingText)
        {
            return Names.FindAll(
                delegate(string name)
                {
                    return name.StartsWith(startingText);
                });
        }

    But I really did not know whether it is thread safe or not, and I was asked to justify it. Can anyone help me understand: (1) the advantages of anonymous methods, and (2) whether the above code is thread safe or not?
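
    One classic "ugly thing" worth citing, offered as a general observation rather than a verdict on the code above: anonymous methods capture variables, not values, so a delegate created inside a loop sees the loop variable's final value. A minimal sketch:

        using System;
        using System.Collections.Generic;

        var actions = new List<Action>();
        for (int i = 0; i < 3; i++)
        {
            // The delegate captures the variable i itself, not a snapshot of its value.
            actions.Add(delegate { Console.Write(i + " "); });
        }
        foreach (var a in actions) a();   // prints "3 3 3", not "0 1 2"

    As for the thread-safety question: the delegate in FindNamesStartingWith only reads its captured parameter and the shared list, so it is safe for concurrent readers provided nothing mutates Names at the same time; List<T> itself makes no guarantee under concurrent writes.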

    Read the article

  • Create an English test

    - by thienthai
    Hi everyone, I want to create English-test software using Windows Forms and C#, something like below:

        Hi David,
        Thanks for your e-mail. I hope things get easier for you before the weekend. You’ve been (be) really busy this week!
        1 _______ (you / make) your vacation plans yet? Last May, I 2 _________ (go) to Japan with my family again. We 3 _______ (be) there three times now! But this time, we 4 _______ (not stay) with my aunt in Tokyo. Instead, we 5 ______ (drive) around to different places. Then in July, my friend Angie and I 6 ________ (travel) to Peru. 7 _______ (you / ever / be) there? It’s one of the most interesting places I 8 _______ (ever / visit). The ruins of Machu Picchu are amazing.
        Write soon!
        Mariko

    How can I display this on a Windows Form so that the blanks (____) can be filled in directly? Any help is appreciated, thanks.
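
    A minimal sketch of one approach, with every name hypothetical: tokenize the passage on its blanks and lay out Labels and inline TextBoxes in a FlowLayoutPanel. Here the blanks are rewritten as {hint} markers to make splitting easy:

        using System;
        using System.Text.RegularExpressions;
        using System.Windows.Forms;

        public class TestForm : Form
        {
            public TestForm()
            {
                var panel = new FlowLayoutPanel { Dock = DockStyle.Fill, WrapContents = true };
                Controls.Add(panel);

                string passage = "Last May, I {go} to Japan with my family again. We {be} there three times now!";
                // Regex.Split keeps the captured hints, so odd-indexed pieces are the blanks.
                string[] parts = Regex.Split(passage, @"\{(.*?)\}");
                for (int i = 0; i < parts.Length; i++)
                {
                    if (i % 2 == 0)
                        panel.Controls.Add(new Label { Text = parts[i], AutoSize = true });
                    else
                        panel.Controls.Add(new TextBox { Width = 80, Tag = parts[i] }); // Tag keeps the verb hint for grading
                }
            }

            [STAThread]
            static void Main() { Application.Run(new TestForm()); }
        }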

    Read the article

  • MonoTouch binding to Linea Pro SDK

    - by jeffrapp
    I'm trying to create a binding to the Linea Pro SDK (it's the barcode scanner they use in the Apple Stores and at Lowes). I'm using David Sandor's bindings as a reference, but the SDK has been updated a few times since January of 2011. I have almost everything working except for the playSound call, which is used to, well, play a sound on the Linea Pro device. The .h file from the SDK declares the call as follows:

        -(BOOL)playSound:(int)volume beepData:(int *)data length:(int)length error:(NSError **)error;

    I've tried using int[], NSArray, and an IntPtr to the int[], but nothing seems to work. The last unsuccessful iteration of my binding looks like:

        [Export ("playSound:beepData:length:")]
        void PlaySound (int volume, NSArray data, int length);

    This doesn't work at all. Also note that I have no idea what to do with the error:(NSError **)error part, either. I am lacking some serious familiarity with C, so any help would be extremely appreciated.
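
    A hedged sketch, not verified against this SDK: the selector in the binding above drops the trailing error: argument (and the BOOL return), so it no longer matches the Objective-C method, and (int *) is a raw C array, which bindings typically surface as an IntPtr. Something closer to the header might look like:

        [Export ("playSound:beepData:length:error:")]
        bool PlaySound (int volume, IntPtr data, int length, out NSError error);

    Hypothetical usage, marshalling the managed int[] into unmanaged memory for the call (whether length counts ints or bytes is an assumption to check against the SDK docs):

        int[] beep = { 2730, 150, 0, 30 };   // made-up beep data
        IntPtr buf = Marshal.AllocHGlobal(beep.Length * sizeof(int));
        try
        {
            Marshal.Copy(beep, 0, buf, beep.Length);
            NSError error;
            device.PlaySound(100, buf, beep.Length, out error);   // 'device' is the bound Linea object
        }
        finally
        {
            Marshal.FreeHGlobal(buf);
        }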

    Read the article

  • SQL to get distinct statistics

    - by Sung Kim
    Hi, suppose I have data in table X:

        id   assign   team
        ----------------------
        1    hunkim   A
        1    ygg      A
        2    hun      B
        2    gw       B
        2    david    B
        3    haha     A

    I want to know how many assigns there are for each id. I can get that using:

        select id, count(distinct assign)
        from X
        group by id
        order by count(distinct assign) desc;

    It will give me something like:

        1 2
        2 3
        3 1

    My question is: how can I get the average of all the assign counts? In addition, I want to know the average per team. So I want to get something like:

        team  assign_avg
        -------------------
        A     1.5
        B     3

    Thanks in advance!
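
    In SQL this is usually an aggregate over a grouped subquery: wrap the count-distinct-per-id query in a derived table and take AVG over it, grouping by team for the second result. Since the shape of the aggregation is the interesting part, here is a hedged C# LINQ sketch of the same two-step computation, using the sample data above:

        using System;
        using System.Linq;

        var rows = new[]
        {
            new { Id = 1, Assign = "hunkim", Team = "A" },
            new { Id = 1, Assign = "ygg",    Team = "A" },
            new { Id = 2, Assign = "hun",    Team = "B" },
            new { Id = 2, Assign = "gw",     Team = "B" },
            new { Id = 2, Assign = "david",  Team = "B" },
            new { Id = 3, Assign = "haha",   Team = "A" },
        };

        // Step 1: distinct assign count per id (keeping the team for later).
        var perId = rows.GroupBy(r => r.Id)
                        .Select(g => new
                        {
                            Id = g.Key,
                            Team = g.First().Team,
                            Count = g.Select(r => r.Assign).Distinct().Count()
                        })
                        .ToList();

        // Average over all the per-id counts: (2 + 3 + 1) / 3 = 2.0
        Console.WriteLine(perId.Average(x => x.Count));

        // Step 2: average of the per-id counts within each team: A -> 1.5, B -> 3
        foreach (var t in perId.GroupBy(x => x.Team)
                               .Select(g => new { Team = g.Key, Avg = g.Average(x => x.Count) }))
            Console.WriteLine(t.Team + " " + t.Avg);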

    Read the article

  • Question about frequency of updating Access

    - by I__
    I have a table in an Access database. This Access database is used on a regular basis, basically from 9-5. Someone else has a copy of this exact table; sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access database table from the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5000 records. Would it severely lock up the table for a few seconds every hour? I would like to publicly apologize for my rude comment to David Fenton.
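
    One common shape for this kind of one-way refresh, sketched with hypothetical in-memory types (the Access I/O and the Modified column are assumptions): key both copies on the primary key, then derive the inserts, updates, and deletes by comparing the two sets. At 5000 rows the comparison itself takes milliseconds, so the table is only locked while the resulting changes are written back.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class TableSync
        {
            public record Row(int Id, string Data, DateTime Modified);

            // One-way merge: make 'local' mirror 'remote'.
            public static void Sync(IDictionary<int, Row> local, IDictionary<int, Row> remote)
            {
                foreach (var r in remote.Values)
                {
                    if (!local.TryGetValue(r.Id, out var mine))
                        local[r.Id] = r;                 // insert: row added offsite
                    else if (r.Modified > mine.Modified)
                        local[r.Id] = r;                 // update: offsite copy is newer
                }
                foreach (var id in local.Keys.Except(remote.Keys).ToList())
                    local.Remove(id);                    // delete: row removed offsite
            }
        }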

    Read the article

  • VB.NET array for slideshow button

    - by StealthRT
    Hey all, I am in the middle of trying to figure out how to go about doing this... I have a form that I call like so:

        Call frmSlideShow.showWhat(1, 4, 8, 9, 11, 22)

    Each of the numbers represents a different image slide (slide1.png, slide4.png, etc.). The problem I am having is trying to create a "previous" and a "next" button to flip through them all: figuring out what number the user is on, going from there, and seeing what numbers are still left from the list above that was sent, etc. If anyone has an idea how I would go about doing that, that would be awesome! :) David
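
    One way to structure it, hedged and sketched in C# since only the idea matters (a VB.NET version is a direct translation): keep the slide numbers that were passed in as an array, track the current index, and let Previous/Next move the index with clamping at the ends.

        int[] slides = { 1, 4, 8, 9, 11, 22 };   // the numbers passed to showWhat
        int current = 0;

        string CurrentFile() => "slide" + slides[current] + ".png";

        void Next()
        {
            if (current < slides.Length - 1) current++;   // stop at the last slide
            // load CurrentFile() into the picture box here
        }

        void Previous()
        {
            if (current > 0) current--;                   // stop at the first slide
        }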

    Read the article

  • Popup extender "frozen" on code-behind exception.

    - by davandries
    Hi, in a C#/ASP.NET project we're using an AJAX ModalPopupExtender to display a "Processing..." message to the users. We display it using a JavaScript call in the code of the ASP.NET page. Then, in the code-behind, we do some database operations and hide the popup again using popup.Hide(). The problem is that when an exception occurs in the code-behind, the popup stays displayed and the application does not handle errors as per the customErrors tag of the web.config. Any idea how to deal with this kind of issue? Thanks, David
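
    A minimal sketch of one mitigation, assuming popup is the ModalPopupExtender and the handler name is hypothetical: put the hide in a finally block so it runs whether or not the database work throws, while the exception still propagates to the usual error handling.

        protected void Process_Click(object sender, EventArgs e)
        {
            try
            {
                DoDatabaseWork();   // hypothetical: the operation that may throw
            }
            finally
            {
                popup.Hide();       // runs on success and on exception alike
            }
        }

    If the call happens during an asynchronous postback (inside an UpdatePanel), it may also be worth checking the ScriptManager's AllowCustomErrorsRedirect property, since partial-postback errors are routed through the ScriptManager rather than straight to customErrors.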

    Read the article

  • How to assign optional attribute of NSManagedObject in a NSManagedObjectModel having Three NSManaged

    - by Kundan
    I am using the Core Data framework. In my NSManagedObjectModel I am using three entities: Class, Student, and Score, where Class and Student have a one-to-many, inverse relationship, and Student and Score have an inverse but one-to-one relationship. The Score entity has all optional attributes with a default decimal value of 0, which is not assigned at the time a new Student is added. Later I want to assign a score individually, to one particular attribute, not to all of the score attributes in one go. I am able to create and add Students to a particular Class, but I have no idea how to fetch a particular student and assign them a score. For example, I want to assign the Score attribute "dribbling" [there are many attributes, like "tackling"] a decimal value for Student "David" of Class "Soccer". How can I do that? Thanks in advance for any suggestions.

    Read the article

  • Compilation error on a user control

    - by MikeP
    I'm stumped! We have a user control for managing account information. We use this particular control on two pages. On one page everything works perfectly and meets our expectations. On the second page, however, we receive compilation errors stating that:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\lrpcentral\0e987bea\6719c8b6\App_Web_PageThatFails.aspx.f3d462c1.oi52bvii.0.cs(172): error CS0433:
        The type 'xxxx_ascx' exists in both
        'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\APPLICATIONNAME\0e987bea\6719c8b6\App_Web_xxxx.ascx.cdcab7d2.xbnvt2za.dll' and
        'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\APPLICATIONNAME\0e987bea\6719c8b6\App_Web_eix7xllr.dll'

    My problem is similar to Cyril's, but "delete everything from Temp" is not an option for me, and Cyril's solution does not apply, since the only variable we have is contained in the designer file, which is not deployed to our production environment (we pre-compile). After reading David's answer (here) I examined my directories for circular dependencies and was unable to find any. Structure:

        Top Level
        Page that works
        Control Directory A
        Page that causes the error
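
    One commonly suggested mitigation for CS0433 collisions, offered as a hedged sketch rather than a confirmed fix for this case: disable batch compilation so each page and control compiles into its own assembly, avoiding the duplicate-type clash between batch assemblies.

        <!-- web.config: hedged sketch, not a guaranteed fix -->
        <configuration>
          <system.web>
            <compilation batch="false" />
          </system.web>
        </configuration>

    The trade-off is slower first-request compilation; it is also worth verifying that the .ascx is not physically duplicated in two folders of the precompiled site, since that produces exactly this error.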

    Read the article

< Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >