Search Results

Search found 6492 results on 260 pages for 'trigger io'.


  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog – particularly his article on dynamic search conditions in T-SQL. I've linked above to his SQL 2005 article, but his 2008 version is also a must-read. I regularly come across uses of the SQL in the title above. Erland's article explains in detail why this is inefficient, but I came across a nice example recently. A stored procedure contained the following code:

        WHERE @Name is null or [Name] like @Name

    As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO:

        Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Note the high number of logical reads. After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the "@x IS NULL" part was spurious. So we changed the clause to:

        WHERE [Name] like @Name

    Now, how much more efficient is this code?

        Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0

    A nice easy win in this case: a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks – here's a quick query to demonstrate:

        USE AdventureWorks
        SET STATISTICS IO ON
        DECLARE @id INT = 51721
        SELECT * FROM Sales.SalesOrderDetail WHERE @id IS NULL OR SalesOrderID = @id
        SELECT * FROM Sales.SalesOrderDetail WHERE SalesOrderID = @id

    Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And, to follow some of Erland's advice, here's how you could get similar performance if it were possible that @id could actually sometimes contain NULL:

        DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000)
        DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct
        SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1'
        IF @id IS NOT NULL SET @sql = @sql + ' AND SalesOrderID = @id'
        IF @id IS NULL SET @sql = @sql + ' AND SalesOrderID IS NULL'
        SET @parameterlist = '@id INT'
        EXEC sp_executesql @sql, @parameterlist, @id

    Sometimes I think we focus too much on hardware and SQL Server configuration, when really the answer is to focus on writing efficient SQL.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source (the subject of the previous posts) can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of the various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution, as well as its flexibility if we were to modify the underlying XML schema.

    So first things first, let's create a WPF project and include our XML data in a data folder within it. On the main window, we'll drag out the following controls:

    - A combo box to contain all of the teams
    - A list box to show the players of the selected team, along with add/delete player buttons
    - A text box tied to the selected player's name, with a save button to save any changes made to the player name
    - A combo box of all the available positions, tied to the currently selected player's position
    - A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

    This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project). To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project.

    Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions – they don't appear underneath any of the other classes, so we need to reference the class directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing.

    Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

    - Team.Name > Teams combo box
    - Team.Players.Name > Players list box
    - Team.Players.Name > Player name text box
    - Team.Players.Statistics > Statistics data grid
    - Position.Name > Positions combo box

    That is it! Really? Well, no, not really – there is one caveat here, in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value. That should do the trick; now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so; ultimately the code should look something like the sketch below. Note the weirdness with the InitializeDataFiles call – that is my current means of telling an IO class where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml except for positions, which are loaded from Lookups.xml.
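    The screenshot of that loading code didn't survive into this excerpt, so here is a rough reconstruction of what the Window.Loaded stub probably looks like. It is only a sketch: the view source resource names and the MFL.IO method names (other than InitializeDataFiles, which the post mentions) are assumptions.

        // Sketch of the window code-behind loading stub (requires System.Windows and System.Windows.Data).
        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            // Tell the IO layer where each entity's data lives: everything in Teams.xml
            // except positions, which come from Lookups.xml (assumed signature).
            MFL.IO.InitializeDataFiles("Data/Teams.xml", "Data/Lookups.xml");

            // The view sources Visual Studio generated when items were dragged from the
            // Data Sources pane (resource names are assumptions).
            var teamViewSource = (CollectionViewSource)FindResource("teamViewSource");
            var positionViewSource = (CollectionViewSource)FindResource("positionViewSource");

            // Load all teams (players and statistics lazy-load underneath) and all positions.
            teamViewSource.Source = MFL.IO.GetTeams();
            positionViewSource.Source = MFL.IO.GetPositions();
        }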
    I think that may be all we need to do to at least load all of the data, so let's run it and see. Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates. Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again. Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when the position Id gets bound, all of the positions aren't loaded yet. I went to the code-behind and switched things so positions load first – no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, a gigantic chocolate chip cookie and a Mountain Dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the Mountain Dew and I think I've got it: Data Context. By default, when setting up the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead. Running it now and selecting the various players: done and done. Everything read and bound – thank you, caffeine and sugar! Oh, and thank you, Visual Studio 2010.

    Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection – gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works. Let's go after something easier: the save button. By default, as we type in new text for the player's name, it shows up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

        MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

    Surprisingly, that worked on the first try. Let's see if we get as lucky with the delete player button:

        MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
        Refresh();

    Note the use of the Refresh method again – I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection); a rough sketch of that idea follows below. Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.
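    Before the performance test, a quick aside: here is roughly what that observable-collection idea might look like. This is an illustration rather than the post's code; the MFL type and member names come from the post, while the control and handler names (and binding the ListBox through ItemsSource directly) are assumptions.

        // Sketch: expose the players through an ObservableCollection so adds/deletes raise
        // change notifications automatically (requires System.Collections.ObjectModel).
        private ObservableCollection<MFL.Player> _players;

        private void BindPlayers(MFL.Team selectedTeam)
        {
            _players = new ObservableCollection<MFL.Player>(selectedTeam.Players);
            lbTeamPlayers.ItemsSource = _players;    // the players ListBox from the post
        }

        private void btnAddPlayer_Click(object sender, RoutedEventArgs e)
        {
            var player = new MFL.Player { Name = "New Player", PositionId = 0 };
            _players.Add(player);                    // appears in the ListBox immediately
            MFL.IO.Save(player, true);               // persist without reloading the screen
        }

        private void btnDeletePlayer_Click(object sender, RoutedEventArgs e)
        {
            var player = (MFL.Player)lbTeamPlayers.SelectedItem;
            _players.Remove(player);                 // disappears immediately, no Refresh()
            MFL.IO.Delete(player);
        }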
    I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons' worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers – or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each: 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no effort to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out, SQL Server and Oracle – use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?
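    For reference, the bulk test-data button described above might be implemented along these lines. Again a sketch only: the MFL constructors, the child collection properties, and the Save(entity, cascade) overload are assumptions based on the names used in these posts.

        // Sketch: generate 30 teams x 50 players x 10 seasons = 15,000 statistic rows.
        private void btnGenerateTestData_Click(object sender, RoutedEventArgs e)
        {
            for (int t = 0; t < 30; t++)
            {
                var team = new MFL.Team { Id = t, Name = "Team " + t };
                for (int p = 0; p < 50; p++)
                {
                    var player = new MFL.Player { Name = "Player " + p, PositionId = p % 2 };
                    for (int y = 0; y < 10; y++)
                    {
                        player.Statistics.Add(new MFL.Statistic
                        {
                            Year = 2000 + y,
                            PassYards = player.PositionId == 0 ? 3500 + y * 10 : 0,
                            RushYards = player.PositionId == 0 ? 50 : 900 + y * 25
                        });
                    }
                    team.Players.Add(player);
                }
                MFL.IO.Save(team, true);   // assumed to cascade the save to players and statistics
            }
        }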

    Read the article

  • Alerts are good, aren't they?

    - by fatherjack
    It is accepted best practice to set some alerts on every SQL instance you install. They aren't particularly well publicised, but I have never seen anyone not recommend setting up alerts for errors 823, 824 and 825. These alerts are focussed on successful access (IO) to the hard drives that SQL Server is using: if there are any errors when reading from or writing to the drives then one of these errors will be returned. Having the alerts on these errors means that any IO issues will be brought to the...(read more)

    Read the article

  • VMWare Player pauses often

    - by pascal
    I'm using 64-bit Windows 8 inside VMware Player, with 2 virtual processor cores; the virtual hard disk resides on a fast local disc and is not preallocated; the host CPU is an Intel i7 3770, which should be capable of hardware virtualisation, but I don't know if VMware uses it; NAT networking; sound card connected, USB connected, accelerated 3D graphics (NVidia 313.30 on the host). My problem is that the VM often pauses for a few seconds, and then speeds up for a few seconds to reach real time again. Time in the VM actually moves faster after the pause; for example, all animations using timers speed up. When running, the vmware-vmx process shows ~150% CPU usage in top, but 0% when pausing (and D state, i.e. waiting for IO). iotop shows normal disk writes from vmware-vmx threads, but during pauses the flush kernel thread uses 99%. Are there some options to try so that VMware doesn't wait for IO? I've tried a few things available from the GUI but the issue never went away…

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective, to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying: don't criticize my crappy code, and certainly don't use it in any somewhat real application; you may become dumber just by looking at this code; you have been warned – really the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager – you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template. The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships; a sketch of what one such generated class might look like is shown below.
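    To make the shape of the output concrete, here is roughly what one generated entity class might look like. This is my paraphrase of the idea rather than the template's literal output; the lazy-loading property and the IO method name are assumptions based on the description that follows.

        // Illustrative shape of a generated entity class (requires System.Collections.Generic).
        public partial class Team
        {
            public int Id { get; set; }
            public string Name { get; set; }

            private IEnumerable<Player> _players;

            // Child rows are only read from the XML on first access - the lazy loading
            // described below. The IO.GetPlayers name is an assumption.
            public IEnumerable<Player> Players
            {
                get { return _players ?? (_players = IO.GetPlayers(this)); }
            }
        }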
    These generated entity classes and their IO methods work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0.

    The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me – I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one statement, no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive (a minimal sketch of what GetXmlData might look like appears at the end of this post). I will definitely put that to the test after we develop a user interface for getting at this data.

    Speaking of the data – where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to a file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (I haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.
    Note that this is TWO XML files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.
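    One loose end from the generated query above: GetXmlData itself isn't shown in the post. A minimal version of it (my assumption about the implementation, not the author's code) could be as simple as:

        // Loads the backing XML file and hands back its root element for LINQ to XML queries.
        // The whole document is read into memory, as the post notes (requires System.Xml.Linq).
        private static XElement GetXmlData(string dataFile)
        {
            return XDocument.Load(dataFile).Root;
        }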

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard, so let's see what the code looks like, following the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object:

        TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
        server.EnsureAuthenticated();
        IBuildServer buildServer = (IBuildServer)server.GetService(typeof(IBuildServer));

    General

    First we create an IBuildDefinition object for the team project and set a name and description for it:

        var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
        buildDefinition.Name = "TestBuild";
        buildDefinition.Description = "description here...";

    Trigger

    Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the "Continuous Integration - Build each check-in" trigger option:

        buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

    Workspace

    For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build.

        buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
        buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

    Build Defaults

    In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

        buildDefinition.BuildController = buildServer.GetBuildController(buildController);
        buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

    Process

    So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .xaml file called a build process template. By default, every new team project contains two build process templates, called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

        // Get default template
        var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First();
        buildDefinition.Process = defaultTemplate;

    There are several process parameters that can be set for the default build process template. Only one of these is required: the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value, which can be any kind of object. This is rather messy but, fortunately, there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
    The following code shows how to set the BuildSettings information for a build definition:

        // Set process parameters
        var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

        // Set BuildSettings properties
        BuildSettings settings = new BuildSettings();
        settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
        settings.PlatformConfigurations = new PlatformConfigurationList();
        settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
        process.Add("BuildSettings", settings);
        buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

    The other build process parameters of a build definition can be set using the same approach.

    Retention Policy

    This one is easy, we just clear the default settings and set our own:

        buildDefinition.RetentionPolicyList.Clear();
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

    Save It!

    And we're done, let's save the build definition:

        buildDefinition.Save();

    That's it!
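    As a quick sanity check that the new definition actually works end-to-end, you can queue a build of it through the same API. A small sketch (the overload shown is the simplest one I'm aware of; adjust as needed for your setup):

        // Queue a build of the newly created definition and report the queued build id.
        IQueuedBuild queuedBuild = buildServer.QueueBuild(buildDefinition);
        Console.WriteLine("Queued build {0} of definition '{1}'.", queuedBuild.Id, buildDefinition.Name);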

    Read the article

  • Lync CMS replication is failing for all Domain Computers

    - by Ravi Kanneganti
    I have Lync Server 2010 and Active Directory installed on 2 different Windows Server 2008 R2 machines. I have added a Windows 7 PC to AD. And I have added this computer to Trusted Application Servers Pool and published the topology. I want to build an UCMA application to extend Lync Server functionality. I have installed UCMA 3.0 SDK in the same computer where Lync Server is residing. But, CMS Replication isn't happening and "Get-CsManagementStoreReplicationStatus" always gives Uptodate as "False" for my Windows 7 PC. I have even tried "Invoke-CSManagementStoreReplication" but nothing changed. Also, this is the error message that I can see in the log file: TL_WARN(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f85 (XDS_Replica_Replicator,FileDistributeTask.Execute:filedistributetask.cs(165))(000000000043B3FA)**Could not distribute the file. Exception: [System.IO.IOException: The process cannot access the file because it is being used by another process.** at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.File.Move(String sourceFileName, String destFileName) at Microsoft.Rtc.Xds.Replication.Replicator.Common.FileDistributeTask.Execute()]. TL_NOISE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f86 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(166))(00000000005C39D4)Enter. TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f87 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(171))(00000000005C39D4)Task error callback is about to be called. TL_VERBOSE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f88 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(230))(000000000385E79C)Enter. TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f89 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(234))(000000000385E79C)Task encountered an error: [ReplicaTaskContainer<FileDistributeTask>{FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, **Access failed**. (E:\RtcReplicaRoot\xds-replica\from-master\data.zip)}, FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, }}]

    Read the article

  • ISCSI Target Ubuntu

    - by erai
    I'm trying to setup iscsitarget on Ubuntu 12.04 but I can't connect to it. On the windows machine it says Target Error. with no other output. My ietd.conf is Target iqn.2012-06.com.org:virtual_machines.lun Lun 0 Type=fileio,Path=/media/volume0/storlun0.bin When I run iscsiadm -m discovery -t st -p localhost The output is iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 closed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: connection login retries (reopen_max) 5 exceeded iscsiadm: Could not perform SendTargets discovery. dmesg output: [ 3324.804665] iscsi_trgt: Removing all connections, sessions and targets [ 3325.875343] iSCSI Enterprise Target Software - version 1.4.20.3 [ 3325.875415] iscsi_trgt: Registered io type fileio [ 3325.875420] iscsi_trgt: Registered io type blockio [ 3325.875425] iscsi_trgt: Registered io type nullio

    Read the article

  • Toshiba laptop cd drive read causes OS to totally freeze

    - by Fujishiro
    Okay, I'll try to write an understandable summary; forgive me if I fail at that attempt. So: there is a Toshiba Satellite notebook with Windows 7 x86 Professional (OEM) installed on it, and everything is fine (okay... somewhat). The problem: if you put an audio disc, or any kind of disc, into the drive, something starts to eat the PC. Back when the owner told me about this, he had put an audio disc into the laptop and Winamp caused the IO load, 100%. I tried taskkill, taskkill /T, tried PowerShell, EVERYTHING. You just can NOT kill Winamp or whatever else becomes the blocker at that time. Even if you kill almost everything, the laptop won't do a clean shutdown. I also tried using the force switch of 'shutdown' from cmd, but no use. (So: at these times you can use the laptop, but the blocker/explorer/disc becomes gray as a non-responding app. You can try to kill them, but that won't work, nor can you shut down the machine.) (I also tried using the PID, but no use. For the highest IO I used "Select Columns" from Task Manager and enabled the IO columns.) My first hunch was a problematic disc: autoplay kicks in and it tries to read (which still shouldn't kill the PC). I disabled autoplay and removed Winamp. Tried other software, etc. Everything was OK. A few days later the owner tried to put a disc into the machine and it started to reproduce the same symptoms, but with a totally different disc. What else to note: a virus is not an option; the machine is protected by BitDefender (valid license) and Spybot. Thanks if you have ANY idea about this strange problem. PS: For now, the owner uses Daemon Tools + BlindWrite as an alternative for those apps which wouldn't start without the disc.

    Read the article

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware servers on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile. What it is doing can be seen here: tuned.conf. We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns. As far as I have understood, increasing vm.dirty_ratio to 40% means that for a server with 20GB of RAM, 8GB can be dirty at any given time, unless vm.dirty_writeback_centisecs is hit first. And while flushing these 8GB, all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio would probably mean higher write performance at peaks, as we now have a larger cache, but then again when the cache fills, IO will be blocked for a considerably longer time (several seconds). The other question is why they are increasing sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks will get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450MB/s at 25,000 IO/s (yes, there is probably some storage system caching happening here) while all CPU cores were happily working in an even mix of user mode and wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single, rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?

    Read the article

  • Glassfish v3 fails on startup: "Cannot allocate memory"

    - by Shisoft
    This is clearly related to the question "Fail to start Glassfish 3.1: java.io.IOException: error=12, Cannot allocate memory". But in my case I have a 512MB-memory Ubuntu 10.04 VPS, so it seems that I shouldn't need to change any configuration. Yet when I start the server, I get this exception:

        VM failed to start: java.io.IOException: Cannot run program "/usr/lib/jvm/java-6-sun-1.6.0.22/bin/java" (in directory "/home/glassfish/glassfish/domains/domain1/config"): java.io.IOException: error=12, Cannot allocate memory

    So I changed <jvm-options>-Xmx512</jvm-options> to <jvm-options>-Xmx400</jvm-options>, but the exception remains. What did I do wrong?

    Result of free -m:

                     total    used    free   shared  buffers   cached
        Mem:           512      43     468        0        0        0
        -/+ buffers/cache:      43     468
        Swap:            0       0       0

    Result of cat /proc/user_beancounters:

        Version: 2.5
        uid       resource         held    maxheld    barrier      limit  failcnt
        146049:   kmemsize      2670652    5385253   51200000   51200000        0
                  lockedpages         0          8       2048       2048        0
                  privvmpages     11134     134522     131200     262200        4
                  shmpages          648       1352     128000     128000        0
                  dummy               0          0          0          0        0
                  numproc            12         73        500        500        0
                  physpages        6519      28162          0  200000000        0
                  vmguarpages         0          0     512000     512000        0
                  oomguarpages     6527      28169     512000     512000        0
                  numtcpsock          4         14       4096       4096        0
                  numflock            0          5       2048       2048        0
                  numpty              1          2         32         32        0
                  numsiginfo          0          3       1024       1024        0
                  tcpsndbuf      159600     265744   20480000   20480000        0
                  tcprcvbuf       65536    3590352   20480000   20480000        0
                  othersockbuf    44232      90640   20480000   20480000        0
                  dgramrcvbuf         0      12848   10240000   10240000        0
                  numothersock       22         31       2048       2048        0
                  dcachesize          0          0   10240000   10240000        0
                  numfile          1002       1474      50000      50000        0
                  dummy               0          0          0          0        0
                  dummy               0          0          0          0        0
                  dummy               0          0          0          0        0
                  numiptent          24         24       2048       2048        0

    Thanks

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions: # Look for and purge old sessions every 30 minutes 09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \ && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \ -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \ fuser -s {} 2> /dev/null \; -delete My problem is that this process is taking a very long time to run, with lots of disk IO. Here's my CPU usage graph: The cleanup running is represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minutes times. At 15:00 I removed the 39 minute time from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). Here are the corresponding graphs for IO time: And disk operations: At the peak where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.

    Read the article

  • Screen recording in Windows 8 makes PC unusable?

    - by Skadier
    OS used: Windows 8 Pro x64. Hi there, I've got a weird problem. I tried to record my screen using different screen recording applications like Camtasia 7, HyperCam, and some others. If I hit "record", everything works fine for about 3-4 seconds. Then the PC slows down heavily, caused by extremely high IO usage. The PC becomes nearly unusable and lags like hell. I can't switch applications, can't get Task Manager with Ctrl+Alt+Del, or anything else, without waiting 1-3 minutes. After about 5-8 minutes and a bit of luck I am sometimes able to end the recorder's process; the IO then calms down very slowly (100% IO lasts for about 2 minutes before returning to normal). I don't know why this happens. Screen recorders that officially support Windows 8 aren't out yet AFAIK, but at least Camtasia 7 worked with the Windows 8 Developer Preview and the Release Preview (I only had to change the record method to .avi). No slowdowns or anything else there. Is there anyone out there who knows this problem too? Is there maybe a solution to this problem?

    Read the article

  • DataGridView row is still dirty after committing changes

    - by Ecyrb
    DataGridView.IsCurrentRowDirty remains true after I commit changes to the database. I want to set it to false so it doesn't trigger RowValidating when it loses focus. I have a DataGridView bound to a BindingList<T>. I handle the CellEndEdit event and save changes to the database. After saving those changes I would like DataGridView.IsCurrentRowDirty to be set to false, since all cells in the row are up to date; however, it stays true. This causes problems for me because when the row does lose focus it will trigger RowValidating, which I handle and in which I validate all three cells. So even though all the cells are valid and none are dirty, it will still validate them all. That's a waste. Here's an example of what I have:

        void dataGridView_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            // Ignore the cell if it's not dirty
            if (!dataGridView.IsCurrentCellDirty) return;

            // Validate the current cell.
        }

        void dataGridView_RowValidating(object sender, DataGridViewCellCancelEventArgs e)
        {
            // Ignore the row if it's not dirty
            if (!dataGridView.IsCurrentRowDirty) return;

            // Validate all cells in the current row.
        }

        void dataGridView_CellEndEdit(object sender, DataGridViewCellEventArgs e)
        {
            // Validate all cells in the current row and return if any are invalid.
            // If they are valid, save changes to the database.
            // This is when I would expect dataGridView.IsCurrentRowDirty to be false.
            // When this row loses focus it will trigger RowValidating and validate all
            // cells in this row, which we already did above.
        }

    I've read posts that said I could call the form's Validate() method, but that will cause RowValidating to fire, which is what I'm trying to avoid. Any idea how I can set DataGridView.IsCurrentRowDirty to false? Or maybe a way to prevent RowValidating from unnecessarily validating all the cells?

    Read the article

  • Properly registering JavaScript and CSS in MVC 2 Editor Templates

    - by Jaxidian
    How do I properly register javascript blocks in an ASP.NET MVC 2 (RTM) Editor template? The specific scenario I'm in is that I want to use Dynarch JSCal2 DateTimePicker for my standard datetime picker, but this question is in general to any reusable javascript package. I have my template working properly now but it has my JS and CSS includes in my master page and I would rather only include these things if I actually need them: <link rel="stylesheet" type="text/css" href="../../Content/JSCal2-1.7/jscal2.css" /> <link rel="stylesheet" type="text/css" href="../../Content/JSCal2-1.7/border-radius.css" /> <script type="text/javascript" src="../../Scripts/JSCal2-1.7/jscal2.js"></script> <script type="text/javascript" src="../../Scripts/JSCal2-1.7/lang/en.js"></script> So obviously I could just put these lines into my template, but then if I have a screen that has 5 DateTimePickers, then this content would be duplicated 5 times which wouldn't be ideal. Anyways, I still want my View's Template to trigger this code being put into the <head> of my page. While it is completely unrelated to my asking this question, I thought I'd share my template on here (so far) in case it's useful in any way: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DateTime>" %> <%= Html.TextBoxFor(model => Model) %> <input type="button" id="<%= ViewData.TemplateInfo.GetFullHtmlFieldId("cal-trigger") %>" value="..." /> <script type="text/javascript"> var <%= ViewData.TemplateInfo.GetFullHtmlFieldId("cal") %> = Calendar.setup({ trigger : "<%= ViewData.TemplateInfo.GetFullHtmlFieldId(string.Empty) %>", inputField : "<%= ViewData.TemplateInfo.GetFullHtmlFieldId(string.Empty) %>", onSelect : function() { this.hide(); }, showTime : 12, selectionType : Calendar.SEL_SINGLE, dateFormat : '%o/%e/%Y %l:%M %P' }); </script>

    Read the article

  • What is the scope of CONTEXT_INFO in SQL Server?

    - by JasonS
    I am using CONTEXT_INFO to pass a username to a delete trigger for the purposes of an audit/history table. I'm trying to understand the scope of CONTEXT_INFO and if I am creating a potential race condition. Each of my database tables has a stored proc to handle deletes. The delete stored proc takes userId as an parameter, and sets CONTEXT_INFO to the userId. My delete trigger then grabs the CONTEXT_INFO and uses that to update an audit table that indicates who deleted the row(s). The question is, if two deletes sprocs from different users are executing at the same time, can CONTEXT_INFO set in one of the sprocs be consumed by the trigger fired by the other sproc? I've seen this article http://msdn.microsoft.com/en-us/library/ms189252.aspx but I'm not clear on the scope of sessions and batches in SQL Server which is key to the article being helpful! I'd post code, but short on time at the moment. I'll edit later if this isn't clear enough. Thanks in advance for any help.

    Read the article

  • Using a Case statement within the values section of an Insert statement

    - by mattgcon
    Please forgive my ignorance and poor SQL programming skills but I am normally a basic SQL developer. I need to create a trigger off the insertion of data in one table to insert different data into another table. Within this trigger I need to insert certain data into the new table based upon values within the newly inserted data from the original table. I am totally confused on this. i thought I would be creative and use a case statement within teh Values section but it is not working. Can anyone please help me on this? (below is the code for the trigger that I have as of now) INSERT INTO dbo.WebOnlineUserPeopleDashboard ( ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY, ONLINE_ROOMS_LIST, ONLINE_ROOMS_PLACEMENT, ONLINE_ROOMS_MANAGEMENT, ONLINE_MAILINGLIST_DIRECTORY, ONLINE_MAILINGLIST_LIST, ONLINE_MAILINGLIST_MEMBERS, ONLINE_MAILINGLIST_MANAGER, ONLINE_PEOPLESEARCH_DIRECTORY ) VALUES IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 1 BEGIN SELECT ONLINE_USERACCOUNT_ID, 1, 1, 1, 1, 1, 1, 1, 1, 1 FROM INSERTED END ELSE IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 0 BEGIN SELECT ONLINE_USERACCOUNT_ID, 0, 0, 0, 0, 0, 0, 0, 0, 0 FROM INSERTED END ELSE BEGIN SELECT ONLINE_USERACCOUNT_ID, CASE --DIRECTORY WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 1 OR ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 0 END, CASE WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 1 THEN 1 WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 0 THEN 0 END FROM INSERTED END END

    Read the article

  • XAML itemscontrol visibility

    - by Sam
    Hello, I have an ItemsControl in my XAML code. When a certain trigger occurs I want to collapse the full ItemsControl, i.e. all of its elements.

        <ItemsControl Name="VideoViewControl" ItemsSource="{Binding Videos}">
          <ItemsControl.ItemsPanel>
            <ItemsPanelTemplate>
              <WrapPanel ItemHeight="120" ItemWidth="160" Name="wrapPanel1"/>
            </ItemsPanelTemplate>
          </ItemsControl.ItemsPanel>
          <ItemsControl.ItemTemplate>
            <DataTemplate>
              <views:VideoInMenuView />
            </DataTemplate>
          </ItemsControl.ItemTemplate>
        </ItemsControl>

    The trigger:

        <DataTrigger Value="videos" Binding="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type UserControl}, AncestorLevel=1}, Path=DataContext.VideosEnable}">
          <Setter Property="ScrollViewer.Visibility" Value="Visible" TargetName="test1" />
          <Setter Property="ScrollViewer.Visibility" Value="Collapsed" TargetName="test2" />
          <Setter Property="WrapPanel.Visibility" Value="Collapsed" TargetName="wrapPanel1" />
        </DataTrigger>

    When I add the last Setter the program crashes. Without this last Setter it works fine, but there is no visibility change... What is wrong with this code? What is the right way to collapse all the elements of an ItemsControl with a trigger?

    Read the article

  • WPF DataTemplate - Overlay

    - by David Ward
    I have a class that I need to provide the datatemplate for. Currently I have two datatemplates, one for when Enabled == true and one for when Enabled == false. The datatemplate for the class is actually the one below which changes the template used based on a trigger: <DataTemplate x:Key="ActionNodeTemplateSelector"> <ContentPresenter Content="{Binding}" Name="cp" /> <DataTemplate.Triggers> <DataTrigger Binding="{Binding Enabled}" Value="True"> <Setter TargetName="cp" Property="ContentTemplate" Value="{StaticResource ActionNodeTemplate}" /> </DataTrigger> <DataTrigger Binding="{Binding Enabled}" Value="False"> <Setter TargetName="cp" Property="ContentTemplate" Value="{StaticResource ActionNodeDisabledTemplate}" /> </DataTrigger> </DataTemplate.Triggers> </DataTemplate> This all works well. However, now I want to also display an image overlay if the "Completed" property is true and a different image if it is incomplete. I could easily carry on using my approach to trigger on the Completed property as well but this would double the number of templates and feels wrong. Is there a way that I could have a trigger on my DataTemplate that will allow me to overlay the correct image over the existing template which is based on the Enabled property?

    Read the article

  • Jquery event to fire in ajax loaded content, IE & FF problem

    - by Sylph
    Hello, I'm trying to trigger onclick events in ajax-loaded content but it doesn't seem to work. Here is my code:

        <li><a href="#" id="subtopic">Title</a></li>
        <script type="text/javascript">
        $(function() {
            $("#subtopic").click(function() {
                $.ajax({
                    url: "test.html",
                    success: function(data) {
                        $('#result').html(data);
                        $(".filetree").treeview();
                        $('#result').click();
                        $('#result').trigger("update");
                    }
                });
            });
        });
        </script>

    In my loaded content:

        <a href="#" id="addSubTopic">Click here</a>
        <script type="text/javascript">
        $(document).ready(function() {
            alert("000");
        });
        $(function() {
            $("#addSubTopic").click(function() {
                alert("0");
                $.ajax({
                    url: url,
                    success: function(data) {
                        $("#listRes").append(data);
                    }
                });
            });
        });
        </script>

    In IE it works fine: the click works and I can get the alert. But in Firefox the whole thing just goes dead. However, my jQuery plugin call $(".filetree").treeview(); works fine in both IE and FF. Any advice? I have tried using .live and .trigger("update"). Thanks in advance

    Read the article

  • Multhreading in Java

    - by Vijay Selvaraj
    I'm working with core Java and IBM WebSphere MQ 6.0. We have a standalone module, say DBComponent, that hits the database and fetches a result set based on a runtime query. The query is passed to the application via MQ messaging. We have a trigger configured for the queue which invokes the DBComponent whenever a message is available in the queue. The DBComponent consumes the message, constructs the query and returns the result set to another queue. In this overall process we use log4j to log statements to a log file for auditing. The database connection is pooled using Apache pool. I am trying to check whether the log messages are logged correctly using a sample program. The program places the input message on the queue and then checks for the corresponding entries in the log file. The trigger method invocation is expected to complete before I check for the message in the log file, but every time my checking code executes first, so the check fails. Even introducing a Thread.sleep(time) doesn't solve the problem. How can I make my method wait until the trigger operation completes? Any suggestion will be helpful.

    Read the article

  • Jquery Using Jeditable and activating on click

    - by BandonRandon
    I found this to be almost exactly what I'm trying to do. I'm using Jeditable and I can get the default setup to work. I've also been able to get the code in the forum above to work. I believe my problem is that, because I'm using a table, I need to do something else to select the previous element. Here is my HTML:

        <table>
          <tr>
            <td width="5%"><input class="cat_checkbox" type="checkbox" name='delete_cat[]' value='<?php echo("$cat_data[cat_id]");?>' /></td>
            <td width="90%" class="edit_cat_title" id='unique_id'>Category</td>
            <td width="5%"><a href="#" class="edit_cat_title_trigger"><img src="images/edit.gif" border="0"></a></td>
          </tr>
        </table>

    and here is my jQuery:

        //modify title content on the fly
        $('.edit_cat_title').editable('action.php', {
            name : 'cat_title',
            indicator : 'Saving...',
            submit : 'OK',
            cancel : 'Cancel',
            tooltip: 'click to edit',
            event : 'edit'
        });

        //trigger with the click of the edit image
        $(".edit_cat_title_trigger").bind("click", function() {
            $(this).prev().trigger("edit_cat_title");
        });

    I know I probably should be able to figure it out, and I know all I have to do is change $(this).prev().trigger("edit_cat_title"); to the right thing, but I'm still really new to jQuery. Thanks

    Read the article
