Search Results

Search found 7116 results on 285 pages for 'nested queries'.

  • How to get nested chain of objects in Linq and MVC2 application?

    - by Anders Svensson
    I am getting all confused about how to solve this problem in Linq. I have a working solution, but the code to do it is way too complicated and circular, I think: I have a timesheet application in MVC 2. I want to query the database that has the following tables (simplified): Project, Task, TimeSegment. The relationships are as follows: a project can have many tasks and a task can have many timesegments. I need to be able to query this in different ways. An example is this: a View is a report that will show a list of projects in a table. Each project's tasks will be listed, followed by a Sum of the number of hours worked on that task. The timesegment object is what holds the hours.

    Here's the View:

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Report.Master" Inherits="System.Web.Mvc.ViewPage<Tidrapportering.ViewModels.MonthlyReportViewModel>" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        Månadsrapport
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h1>Månadsrapport</h1>
        <div style="margin-top: 20px;">
            <span style="font-weight: bold">Kund: </span>
            <%: Model.Customer.CustomerName %>
        </div>
        <div style="margin-bottom: 20px">
            <span style="font-weight: bold">Period: </span>
            <%: Model.StartDate %> - <%: Model.EndDate %>
        </div>
        <div style="margin-bottom: 20px">
            <span style="font-weight: bold">Underlag för: </span>
            <%: Model.Employee %>
        </div>
        <table class="mainTable">
            <tr>
                <th style="width: 25%">Projekt</th>
                <th>Specifikation</th>
            </tr>
            <% foreach (var project in Model.Projects) { %>
            <tr>
                <td style="vertical-align: top; padding-top: 10pt; width: 25%">
                    <%: project.ProjectName %>
                </td>
                <td>
                    <table class="detailsTable">
                        <tr>
                            <th>Aktivitet</th>
                            <th>Timmar</th>
                            <th>Ex moms</th>
                        </tr>
                        <% foreach (var task in project.CurrentTasks) { %>
                        <tr class="taskrow">
                            <td class="task" style="width: 40%"><%: task.TaskName %></td>
                            <td style="width: 30%"><%: task.TaskHours.ToString() %></td>
                            <td style="width: 30%"><%: String.Format("{0:C}", task.Cost) %></td>
                        </tr>
                        <% } %>
                    </table>
                </td>
            </tr>
            <% } %>
        </table>
        <table class="summaryTable">
            <tr>
                <td style="width: 25%"></td>
                <td>
                    <table style="width: 100%">
                        <tr>
                            <td style="width: 40%">Totalt:</td>
                            <td style="width: 30%"><%: Model.TotalHours.ToString() %></td>
                            <td style="width: 30%"><%: String.Format("{0:C}", Model.TotalCost) %></td>
                        </tr>
                    </table>
                </td>
            </tr>
        </table>
        <div class="price">
            <table>
                <tr>
                    <td>Moms: </td>
                    <td style="padding-left: 15px;"><%: String.Format("{0:C}", Model.VAT) %></td>
                </tr>
                <tr>
                    <td>Att betala: </td>
                    <td style="padding-left: 15px;"><%: String.Format("{0:C}", Model.TotalCostAndVAT) %></td>
                </tr>
            </table>
        </div>
    </asp:Content>

    Here's the action method:

    [HttpPost]
    public ActionResult MonthlyReports(FormCollection collection)
    {
        MonthlyReportViewModel vm = new MonthlyReportViewModel();
        vm.StartDate = collection["StartDate"];
        vm.EndDate = collection["EndDate"];
        int customerId = Int32.Parse(collection["Customers"]);
        List<TimeSegment> allTimeSegments = GetTimeSegments(customerId, vm.StartDate, vm.EndDate);
        vm.Projects = GetProjects(allTimeSegments);
        vm.Employee = "Alla";
        vm.Customer = _repository.GetCustomer(customerId);
        vm.TotalCost = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.Cost); //Corresponds to above foreach
        vm.TotalHours = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.TaskHours);
        vm.TotalCostAndVAT = vm.TotalCost * 1.25;
        vm.VAT = vm.TotalCost * 0.25;
        return View("MonthlyReport", vm);
    }

    And the "helper" methods:

    public List<TimeSegment> GetTimeSegments(int customerId, string startdate, string enddate)
    {
        var timeSegments = _repository.TimeSegments
            .Where(timeSegment => timeSegment.Customer.CustomerId == customerId)
            .Where(timeSegment => timeSegment.DateObject.Date >= DateTime.Parse(startdate) && timeSegment.DateObject.Date <= DateTime.Parse(enddate));
        return timeSegments.ToList();
    }

    public List<Project> GetProjects(List<TimeSegment> timeSegments)
    {
        var projectGroups = from timeSegment in timeSegments
                            group timeSegment by timeSegment.Task into g
                            group g by g.Key.Project into pg
                            select new { Project = pg.Key, Tasks = pg.Key.Tasks };
        List<Project> projectList = new List<Project>();
        foreach (var group in projectGroups)
        {
            Project p = group.Project;
            foreach (var task in p.Tasks)
            {
                task.CurrentTimeSegments = timeSegments.Where(ts => ts.TaskId == task.TaskId).ToList();
                p.CurrentTasks.Add(task);
            }
            projectList.Add(p);
        }
        return projectList;
    }

    Again, as I mentioned, this works, but of course it is really complex and I get confused myself just looking at it, even now that I'm coding it. I sense there must be a much easier way to achieve what I want. Basically you can tell from the View what I want to achieve: I want to get a collection of projects. Each project should have its associated collection of tasks. And each task should have its associated collection of timesegments for the specified date period. Note that the projects and tasks selected must also only be the projects and tasks that have timesegments for this period. I don't want all projects and tasks that have no timesegments within this period. It seems the group by Linq query at the beginning of the GetProjects() method sort of achieves this (if extended to have the conditions for date and so on), but I can't return this and pass it to the view, because it is an anonymous object. I also tried creating a specific type in such a query, but couldn't wrap my head around that either... I hope there is something I'm missing and there is some easier way to achieve this, because I need to be able to do several other different queries as well eventually. I also don't really like the way I solved it with the "CurrentTimeSegments" properties and so on. These properties don't really exist on the model objects in the first place; I added them in partial classes to have somewhere to put the filtered results for each part of the nested object chain... Any ideas?

    Read the article

  • Exploring packages in code

    In my previous post Searching for tasks with code you can see how to explore the control flow side of packages, drilling down through containers, tasks, and event handlers, but it didn't cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package.

    I took the sample Task Search application I'd written previously, and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself, e.g. Package - MyPackage. The sample package we used last time showed nested objects as well as an event handler; an OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool would look like this:

    PackageObjects v1.0.0.0 (1.0.0.26627)
    Copyright (C) 2009 Konesans Ltd
    Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExecSQLForSearch.dtsx
    Package - EventsAndContainersWithExecSQLForSearch
      For Loop - FOR Counter Loop
        Task - SQL In Counter Loop
      Sequence Container - SEQ For Each Loop Wrapper
        For Each Loop - FEL Simple Loop
          Task - SQL In FEL
            Task - SQL On Pre Execute for FEL SQL Task
      Sequence Container - SEQ Top Level
        Sequence Container - SEQ Nested Lvl 1
          Sequence Container - SEQ Nested Lvl 2
            Task - SQL In Nested Lvl 2
          Task - SQL In Nested Lvl 1 #1
          Task - SQL In Nested Lvl 1 #2
    Connection Manager - LocalHost

    The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task and see if it is a Data Flow task. For connections you just examine the package's Connections collection, as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself.

    // Load the package file
    Application application = new Application();
    using (Package package = application.LoadPackage(filename, null))
    {
        // Write out the package name
        Console.WriteLine("Package - {0}", package.Name);

        ... More ...

        // Look at the connections
        ProcessConnections(package.Connections);
    }

    private static void ProcessConnections(Connections connections)
    {
        foreach (ConnectionManager connectionManager in connections)
        {
            Console.WriteLine("Connection Manager - {0}", connectionManager.Name);
        }
    }

    What we didn't see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task, and if so we can then loop through all of the components inside the data flow.

    private static void ProcessTaskHost(TaskHost taskHost)
    {
        if (taskHost == null)
        {
            return;
        }
        Console.WriteLine("Task - {0}", taskHost.Name);

        // Check if the task is a Data Flow task
        MainPipe pipeline = taskHost.InnerObject as MainPipe;
        if (pipeline != null)
        {
            ProcessPipeline(pipeline);
        }
    }

    private static void ProcessPipeline(MainPipe pipeline)
    {
        foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection)
        {
            Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name);

            // If you wish to make changes to the component then you should really use the managed wrapper.
            // CManagedComponentWrapper wrapper = componentMetadata.Instantiate();
            // wrapper.SetComponentProperty("PropertyName", "Value");
        }
    }

    Hopefully you can see how we get a reference to the Data Flow task, and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you wanted to know more about the component you could look at the ObjectType or ComponentClassID properties. After that it gets a bit harder and you should get a reference to the wrapper object as the comment suggests and start using the properties, just like you would in the create packages samples; see our Code Development category for some of these examples.

    Download Sample code project PackageObjects.zip (5KB)
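
    For completeness, here is a minimal sketch of the recursive walk that would feed ProcessTaskHost, in the spirit of the earlier Task Search post. It assumes the standard Microsoft.SqlServer.Dts.Runtime object model (IDTSSequence for anything that holds executables, EventsProvider for anything that can host event handlers); treat it as an outline rather than the author's exact code:

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    // Walk every executable in a sequence (the Package itself implements
    // IDTSSequence), recursing into containers, loops and event handlers.
    private static void ProcessSequence(IDTSSequence sequence)
    {
        foreach (Executable executable in sequence.Executables)
        {
            TaskHost taskHost = executable as TaskHost;
            if (taskHost != null)
            {
                ProcessEventHandlers(taskHost); // tasks can host event handlers too
                ProcessTaskHost(taskHost);
                continue;
            }

            DtsContainer container = executable as DtsContainer;
            if (container != null)
            {
                Console.WriteLine("Container - {0}", container.Name);
                ProcessEventHandlers(container as EventsProvider);

                IDTSSequence nested = executable as IDTSSequence;
                if (nested != null) // Sequence, For Loop, For Each Loop
                {
                    ProcessSequence(nested);
                }
            }
        }
    }

    private static void ProcessEventHandlers(EventsProvider eventsProvider)
    {
        if (eventsProvider == null) return;
        foreach (DtsEventHandler eventHandler in eventsProvider.EventHandlers)
        {
            ProcessSequence(eventHandler); // an event handler is itself a sequence
        }
    }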

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clear this is not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following:

    SELECT TOP 30 a.Id
    FROM Posts a
    JOIN Posts q ON q.Id = a.ParentId
    JOIN PostTags pt ON q.Id = pt.PostId
    WHERE a.PostTypeId = 2
      AND a.DeletionDate IS NULL
      AND a.CommunityOwnedDate IS NULL
      AND a.CreationDate > @date
      AND LEN(a.Body) > 300
      AND pt.Tag = @tag
      AND a.Score > 0
    ORDER BY a.Score DESC

    The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal; executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts and stats are tracked in sys.dm_exec_query_stats.

    Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.

    Read the article

  • Office 2010 Trust Center settings: How to enable data connections in the "old" way?

    - by GSerg
    We're planning an upgrade from Office 2003 to 2010 and have identified a big problem. In Office 2003, if the workbook you're opening contains a query table that fetches data from a data source automatically (upon file open or at certain intervals), then a security dialog pops up asking whether you want to allow that. If you say Yes, the queries will refresh automatically when they need to. If you say No, the queries will not refresh automatically, neither on file open nor on time intervals, but you will be able to refresh any of them manually at any time by right-clicking and selecting Refresh. There is also a registry parameter to say: don't display that dialog, just allow the queries.

    This is exactly what we want. On users' computers we have the registry parameter applied, so the users never see any dialogs. On developers' computers the parameter is not applied, so every time a file is opened the developer decides whether to allow the auto-refreshing for the current session. Usually the answer is No, because for developing, it is essential to not have queries refresh when they want to, but instead to refresh them when the developer wants.

    The problem is that in Office 2010, which we are testing, we can't find a way to achieve this functionality: The allow/disallow messages are now grouped into one yellow button that either allows everything or disallows everything (including, say, macros, if macro security is set to "Disable, but ask"). If you don't click the yellow Allow button, the queries are disabled completely, not just for automatic execution. You cannot right-click and refresh a particular query; doing that would summon a security dialog prompting for enabling queries, and if you say Yes, all queries in the document will be enabled for auto-execution and will start executing immediately. This sort of ruins our development environment.

    Is there a way to get the trust thingies in Office 2010 to work in the same way as before? Is there yet another registry parameter to say: prompt for auto-refresh, but allow manual refresh even when auto-refresh is disabled?

    Read the article

  • format an xml string in Ruby

    - by user1476512
    Given an xml string like this:

    <some><nested><xml>value</xml></nested></some>

    what's the best option (using Ruby) to format it readably, like:

    <some>
      <nested>
        <xml>value</xml>
      </nested>
    </some>

    I've found an answer here: what's the best way to format an xml string in ruby?, which is really helpful. But it formats the xml like:

    <some>
      <nested>
        <xml>
          value
        </xml>
      </nested>
    </some>

    As my xml string is rather long, it is not readable in this format. Thanks in advance!

    Read the article

  • Will new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use an user's account at all?

    - by P5music
    I see that, even without being logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know it for sure, so I ask for confirmation here, especially for the future with the new Twitter API restrictions. I mean, will it be possible to get tweets from hashtags or accounts without logging in to a user account, and so without accessing the user settings, subscriptions, etc. (because I do not need them), thus not having to respect any token limit?

    I found these API 1.1 FAQs; do I have to be concerned?

    Will an application have to request user authorization just to make public API calls? When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts.

    Will the Search API require authentication? The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular
    - Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way of explanation

    Options

    Ones I am aware of, and general features:

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve).
    - Use Common Table Expressions in those databases that support them to traverse.

    Nested Set (a.k.a. Modified Preorder Tree Traversal):
    - First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties
    - Columns: Left, Right
    - Cheap level, ancestry, descendants
    - Compared to Adjacency List, moves, inserts, and deletes are more expensive.
    - Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work.

    Nested Intervals:
    - Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; some good ideas about how to use triggers for maintaining this approach):
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in what order)
    - For complete knowledge of a hierarchy, needs to be combined with another option.

    Flat Table:
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive move and delete
    - Cheap ancestry and descendants
    - Good use: threaded discussion - forums / blog comments

    Lineage Column (a.k.a. Materialized Path, Path Enumeration):
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limit to how deep the hierarchy can be.
    - Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path')
    - Ancestry tricky (database-specific queries)

    Database Specific Notes

    - MySQL: Use session variables for Adjacency List
    - Oracle: Use CONNECT BY to traverse Adjacency Lists
    - PostgreSQL: ltree datatype for Materialized Path
    - SQL Server: General summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expand the depth that can be represented.
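
    To make the Adjacency List / Lineage Column trade-off above concrete, here is a small, hypothetical in-memory sketch in C#; the Node shape and method names are invented for illustration and correspond to no particular database schema:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Adjacency List row: each node stores only its parent's ID.
    public class Node
    {
        public int Id { get; set; }
        public int? ParentId { get; set; } // null for a root
        public string Name { get; set; }
    }

    public static class Hierarchy
    {
        // Descendants are the "expensive" part of a pure Adjacency List:
        // they need recursive expansion (a recursive CTE in SQL terms).
        public static IEnumerable<Node> Descendants(IList<Node> all, int id)
        {
            foreach (Node child in all.Where(n => n.ParentId == id))
            {
                yield return child;
                foreach (Node grandChild in Descendants(all, child.Id))
                    yield return grandChild;
            }
        }

        // Lineage Column: build "/1/4/9/"-style paths once; descendant
        // queries then become simple string prefix matches.
        public static string Lineage(IList<Node> all, Node node)
        {
            return node.ParentId == null
                ? "/" + node.Id + "/"
                : Lineage(all, all.First(n => n.Id == node.ParentId)) + node.Id + "/";
        }
    }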

    Read the article

  • compiling a linq to sql query

    - by frenchie
    Hi, I have queries that are built like this:

    public static List<MyObjectModel> GetData(int MyParam)
    {
        using (DataModel MyModelDC = new DataModel())
        {
            var MyQuery = from .... select new MyObjectModel { ... };
            return new List<MyObjectModel>(MyQuery);
        }
    }

    It seems that compiled linq-to-sql queries are about as fast as stored procedures, and so the goal is to convert these queries into compiled queries. What's the syntax for this? Thanks.
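
    A hedged sketch of what the compiled form could look like, using System.Data.Linq's CompiledQuery.Compile. The delegate must be stored in a static field so the compiled translation is actually reused, and the DataContext becomes an explicit first parameter; the table and property names below are placeholders, not the actual model:

    using System;
    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Linq;

    public static class MyQueries
    {
        // Compiled once per AppDomain, reused on every call.
        private static readonly Func<DataModel, int, IQueryable<MyObjectModel>> GetDataQuery =
            CompiledQuery.Compile((DataModel dc, int myParam) =>
                from row in dc.SomeTable        // placeholder table
                where row.SomeId == myParam     // placeholder predicate
                select new MyObjectModel { /* ... */ });

        public static List<MyObjectModel> GetData(int myParam)
        {
            using (DataModel myModelDC = new DataModel())
            {
                return GetDataQuery(myModelDC, myParam).ToList();
            }
        }
    }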

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated based on each set of parameter values / predicates. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory, let's see an example where we initially sort 1 month of data and then use the stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range.

    --Example provided by www.sqlworkshops.com
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
        declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
        select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
            where c.CreationDate between @CreationDateFrom and @CreationDateTo
            order by c.CustomerName
        option (maxdop 1)
    end
    go

    Let's execute the stored procedure initially with a 1 month date range.

    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be ok. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then first execute the stored procedure with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is similar to the actual number of rows of 259200. Better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on 6 months of data estimation (259200 rows) instead of 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then first execute the stored procedure with a 2 day date range.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-02'
    go

    The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without spill over tempdb. This is clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when granted memory is too low.

    Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
        declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
        select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
            where c.CreationDate between @CreationDateFrom and @CreationDateTo
            order by c.CustomerName
        option (maxdop 1, recompile)
    end
    go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
        declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
        select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
            where c.CreationDate between @CreationDateFrom and @CreationDateTo
            order by c.CustomerName
        option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
    end
    go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has overestimation of rows and memory. This is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.

    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-12-31'
    go

    The stored procedure took 1138 ms to complete. 2592000 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb, which could be very expensive compared to compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Nested AccordionItem. Inner AccordionItem does not expand.

    - by Ali
    In Silverlight an AccordionItem is inside another one. When the inner one is selected, it cannot further expand its parent, which is already expanded to show its own content. I tried to get around it by templating but I was unlucky. Does anyone have a solution for it [prefer a solution without code]?

    <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:layoutPrimitivesToolkit="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows.Controls.Layout.Toolkit"
        xmlns:layoutToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Layout.Toolkit"
        xmlns:controlsToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" x:Class="NestedAccordion_Silverlight.MainPage" Width="640" Height="480">
        <Grid x:Name="LayoutRoot" Background="White">
            <layoutToolkit:Accordion BorderBrush="#FF00FF53" SelectionMode="ZeroOrMore">
                <layoutToolkit:AccordionItem Header="Header" VerticalAlignment="Top">
                    <StackPanel VerticalAlignment="Top">
                        <TextBlock TextWrapping="Wrap" Text="Some content"/>
                        <Button Content="Button" Width="75"/>
                        <layoutToolkit:AccordionItem Header="Inner Accordion1" VerticalAlignment="Top">
                            <StackPanel VerticalAlignment="Top">
                                <TextBlock TextWrapping="Wrap" Text="Some content"/>
                                <Button Content="Button" Width="75"/>
                            </StackPanel>
                        </layoutToolkit:AccordionItem>
                    </StackPanel>
                </layoutToolkit:AccordionItem>
                <layoutToolkit:AccordionItem Header="Header" VerticalAlignment="Top">
                    <StackPanel>
                        <TextBlock TextWrapping="Wrap" Text="Some content"/>
                        <Button Content="Button" Width="75"/>
                    </StackPanel>
                </layoutToolkit:AccordionItem>
                <layoutToolkit:AccordionItem Header="Header" VerticalAlignment="Top">
                    <StackPanel>
                        <TextBlock TextWrapping="Wrap" Text="Some content"/>
                        <Button Content="Button" Width="75"/>
                    </StackPanel>
                </layoutToolkit:AccordionItem>
            </layoutToolkit:Accordion>
        </Grid>
    </UserControl>

    Is it a bug, or am I on the wrong path?

    Read the article

  • c# Nested categories (sub-categories) in the form control “property grid”.

    - by Cathering
    I'm new to C# and I've been trying to design my own program for a while now. I came across a control named PropertyGrid; it suits me perfectly, and after Googling, I managed to find how to split up the various properties into categories using attributes. But I cannot find any information on adding sub-categories to another category. Can anyone shed light on this subject? Thank you.
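
    As far as I know the stock PropertyGrid has no true sub-categories, but the usual workaround is to group the related properties into a child object and mark it with ExpandableObjectConverter, so it expands in place under its parent category. A minimal sketch (all names invented):

    using System.ComponentModel;

    // The child object expands inline beneath its property in the grid.
    [TypeConverter(typeof(ExpandableObjectConverter))]
    public class AppearanceSettings
    {
        public string FontName { get; set; }
        public int FontSize { get; set; }

        // Shown as the collapsed summary line in the grid.
        public override string ToString() { return FontName + ", " + FontSize; }
    }

    public class MySettings
    {
        [Category("Display")]
        public AppearanceSettings Appearance { get; set; }

        [Category("Display")]
        public bool ShowToolbar { get; set; }
    }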

    Read the article

  • Need help using the DefaultModelBinder for a nested model.

    - by Will
    There are a few related questions, but I can't find an answer that works. Assuming I have the following models:

    public class EditorViewModel
    {
        public Account Account { get; set; }
        public string SomeSimpleStuff { get; set; }
    }

    public class Account
    {
        public string AccountName { get; set; }
        public int MorePrimitivesFollow { get; set; }
    }

    and a view that extends ViewPage<EditorViewModel> which does the following:

    <%= Html.TextBoxFor(model => model.Account.AccountName)%>
    <%= Html.ValidationMessageFor(model => model.Account.AccountName)%>
    <%= Html.TextBoxFor(model => model.SomeSimpleStuff )%>
    <%= Html.ValidationMessageFor(model => model.SomeSimpleStuff )%>

    and my controller looks like:

    [HttpPost]
    public virtual ActionResult Edit(EditorViewModel account) { /*...*/ }

    How can I get the DefaultModelBinder to properly bind my EditorViewModel? Without doing anything special, I get an empty instance of my EditorViewModel with everything null or default. The closest I've come is by calling UpdateModel manually:

    [HttpPost]
    public virtual ActionResult Edit(EditorViewModel account)
    {
        account.Account = new Account();
        UpdateModel(account.Account, "Account");
        // this kills me:
        UpdateModel(account);

    This successfully updates my Account property model, but when I call UpdateModel on account (to get the rest of the public properties of my EditorViewModel) I get the completely unhelpful "The model of type ... could not be updated." There is no inner exception, so I can't figure out what's going wrong. What should I do with this?
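
    One thing worth double-checking here, offered as an assumption rather than a diagnosis: the DefaultModelBinder treats the action parameter name as a field prefix when any posted key starts with it, and a parameter named account collides (case-insensitively) with the Account.AccountName keys the view emits, which can leave every property null. A hedged sketch of the two usual ways around it (either one, not both in the same controller):

    // Option 1: rename the parameter so it no longer matches a property prefix.
    [HttpPost]
    public virtual ActionResult Edit(EditorViewModel model)
    {
        // TextBoxFor(m => m.Account.AccountName) posts "Account.AccountName",
        // which the binder walks into model.Account.AccountName by itself.
        return View(model);
    }

    // Option 2: keep the name but tell the binder to expect no prefix.
    [HttpPost]
    public virtual ActionResult Edit([Bind(Prefix = "")] EditorViewModel account)
    {
        return View(account);
    }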

    Read the article

  • How to Set focus on the 1st Item of Nested Listbox in Silverlight?

    - by Subhen
    Hi, I have a ListBox which contains another ListBox inside it.

    <ListBox x:Name="listBoxParent">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <StackPanel>
                    <Image x:Name="thumbNailImagePath" Source="/this.jpg" Style="{StaticResource ThumbNailPreview}" />
                    <TextBlock Margin="5" Text="{Binding Smthing}" Style="{StaticResource TitleBlock_Artist}" />
                </StackPanel>
                <StackPanel Style="{StaticResource someStyle}">
                    <ListBox x:Name="listBoxChild" Loaded="listBoxChild_Loaded" BorderThickness="0">
                        <ListBox.ItemTemplate>
                            <DataTemplate>
                                <StackPanel>
                                    <TextBlock Margin="5" Text="{Binding myText}" Width="300"/>
                                </StackPanel>
                            </DataTemplate>
                        </ListBox.ItemTemplate>
                    </ListBox>
                </StackPanel>
            </DataTemplate>
        </ListBox.ItemTemplate>
        <ListBox.ItemsPanel>
        </ListBox.ItemsPanel>
    </ListBox>

    Now when I try to set focus to the child ListBox:

    public void listBoxChild_Loaded(object sender, RoutedEventArgs e)
    {
        var myListBox = (ListBox)sender;
        myListBox.ItemsSource = PageVariables.eOUTData; //listboxSongsData;
        myListBox.SelectedIndex = 0;
        myListBox.Focus();
    }

    Thanks,
    Subhen
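
    A hedged guess at the missing piece: calling Focus() on the ListBox itself doesn't focus an item. In Silverlight you can ask the ItemContainerGenerator for the first ListBoxItem, deferring via the Dispatcher since containers are generated after the Loaded event fires; a sketch:

    public void listBoxChild_Loaded(object sender, RoutedEventArgs e)
    {
        var myListBox = (ListBox)sender;
        myListBox.ItemsSource = PageVariables.eOUTData;
        myListBox.SelectedIndex = 0;

        // Defer until item containers exist, then focus the first one.
        myListBox.Dispatcher.BeginInvoke(() =>
        {
            var firstItem = myListBox.ItemContainerGenerator
                                     .ContainerFromIndex(0) as ListBoxItem;
            if (firstItem != null)
            {
                firstItem.Focus();
            }
        });
    }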

    Read the article

  • Can the MVVM-Light ViewModelLocator be used in nested ViewModels?

    - by dthrasher
    The Visual Studio 2008 Designer doesn't seem to like UserControls that reference the MVVM-Light ViewModelLocator. I get an error message like: Could not create an instance of type 'MyUserControl'. For example, the following XAML will cause this behavior if MyUserControl uses the ViewModelLocator to establish its DataContext.

    <Page x:Class="MyProject.Views.MainView"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:Views="clr-namespace:MyProject.Views" >
        <Grid>
            <Views:MyUserControl/>
        </Grid>
    </Page>

    Is the ViewModelLocator only meant to be used in top-level Views?
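
    One hedged workaround, if the designer failure comes from the locator's constructor touching runtime-only services: MVVM Light exposes ViewModelBase.IsInDesignModeStatic, so the locator can hand the designer harmless stand-ins. A sketch with invented service names, which may or may not match this situation:

    using GalaSoft.MvvmLight;

    public class ViewModelLocator
    {
        public ViewModelLocator()
        {
            if (ViewModelBase.IsInDesignModeStatic)
            {
                // Design time: construct view models from dummy data only.
                Main = new MainViewModel(new DesignDataService()); // hypothetical stub
            }
            else
            {
                // Run time: wire up the real services.
                Main = new MainViewModel(new DataService()); // hypothetical service
            }
        }

        public MainViewModel Main { get; private set; }
    }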

    Read the article

  • How do I retrieve the values entered in a nested control added dynamically to a webform?

    - by Front Runner
    I have a date range user control with two calendar controls:

    DateRange.ascx

    public partial class DateRange
    {
        public string FromDate
        {
            get { return calFromDate.Text; }
            set { calFromDate.Text = value; }
        }

        public string ToDate
        {
            get { return calToDate.Text; }
            set { calToDate.Text = value; }
        }
    }

    I have another user control in which the DateRange control is loaded dynamically:

    ParamViewer.ascx

    public partial class ParamViewer
    {
        public void Setup(string sControlName)
        {
            //Load parameter control
            Control control = LoadControl("DateRange.ascx");
            Controls.Add(control);
            DateRange dteFrom = (DateRange)control.FindControl("calFromDate");
            DateRange dteTo = (DateRange)control.FindControl("calToDate");
        }
    }

    I have a main page webForm1.aspx where ParamViewer.ascx is added. When the user enters the dates, they're set correctly in the DateRange control. My question is: how do I retrieve the values entered in DateRange (FromDate and ToDate) from the btnSubmit_Click event in webForm1? Please advise. Thank you in advance.
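
    A hedged sketch of the usual pattern: recreate the dynamic control on every request (e.g. from Page_Init) so its posted values are restored, keep a typed reference to it, and expose the DateRange's own FromDate/ToDate through ParamViewer rather than hunting for the inner calendars. The paramViewer1 instance name is an assumption:

    public partial class ParamViewer
    {
        private DateRange _dateRange;

        // Surface the nested control's values to the hosting page.
        public string FromDate { get { return _dateRange.FromDate; } }
        public string ToDate { get { return _dateRange.ToDate; } }

        public void Setup(string sControlName)
        {
            // Must run on every postback, or the posted dates are lost.
            _dateRange = (DateRange)LoadControl("DateRange.ascx");
            _dateRange.ID = "dateRange"; // stable ID so postback data lines up
            Controls.Add(_dateRange);
        }
    }

    // In webForm1:
    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        string from = paramViewer1.FromDate;
        string to = paramViewer1.ToDate;
    }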

    Read the article

  • 'Invalid column name [ColumnName]' on a nested linq query.

    - by Joe
    I've got the following query:

    ATable
        .GroupBy(x => new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID})
        .Select(x => new {FieldA = x.Key.FieldA, ..., last_seen = x.OrderByDescending(y => y.Timestamp).FirstOrDefault().Timestamp})

    results in:

    SqlException: Invalid column name 'FieldAID' x 5
    SqlException: Invalid column name 'FieldBID' x 5
    SqlException: Invalid column name 'FieldCID' x 1

    I've worked out that it has to do with the final access to Timestamp, because this works:

    ATable
        .GroupBy(x => new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID})
        .Select(x => new {FieldA = x.Key.FieldA, ..., last_seen = x.OrderByDescending(y => y.Timestamp).FirstOrDefault()})

    The query has been simplified. The purpose is to group by a set of variables and then show the last time this grouping occurred in the db. I'm using Linqpad 4 to generate these results, so the Timestamp gives me a string whereas FirstOrDefault gives me the whole object, which isn't ideal.

    Update: On further testing I've noticed that the number and type of SqlException are related to the class created in the groupby clause. So,

    ATable
        .GroupBy(x => new {FieldA = x.FieldAID})
        .Select(x => new {FieldA = x.Key.FieldA, last_seen = x.OrderByDescending(y => y.Timestamp).FirstOrDefault()})

    results in

    SqlException: Invalid column name 'FieldAID' x 5

    Read the article

  • Nested Model binding in ASP.NET MVC2. fields_for from rails equivalent

    - by dagda1
    Hi, I am looking for some examples of how to do model binding in ASP.NET MVC2 for COMPLEX objects. All the examples I can find are of simple objects with no child collections or child objects. Say I have an Expense object with a child ExpensePayment object. In Rails, child objects are rendered with HTML name attributes like this: expense[expense_payment][net]. Rails uses fields_for to render child objects. How can I accomplish something similar in ASP.NET MVC2? Cheers, Paul
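
    For what it's worth, a hedged sketch of the closest MVC 2 equivalent: the strongly typed helpers emit dot-notation names, and the DefaultModelBinder walks those names back into child objects, which is the role fields_for plays in Rails. The ExpensePayment.Net and Payments property names are guesses based on the question:

    <%-- Renders name="ExpensePayment.Net"; binds to expense.ExpensePayment.Net --%>
    <%= Html.TextBoxFor(m => m.ExpensePayment.Net) %>

    <%-- For child collections, indexed names bind back into a list:
         renders name="Payments[0].Net", "Payments[1].Net", ... --%>
    <% for (int i = 0; i < Model.Payments.Count; i++) { %>
        <%= Html.TextBoxFor(m => m.Payments[i].Net) %>
    <% } %>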

    Read the article

  • extjs how to make a nested child using xTemplate when we don't know how deep it is?

    - by Ebo the gordon
    First, sorry if my English is bad. In my script, the variable tplData below is dynamic (let's say it is generated from a database), so every child can have another child, and so on. Now I'm stuck on how to iterate over it.

    var tplData = [{
        name : 'Naomi White'
    },{
        name : 'Yoko Ono'
    },{
        name : 'John Smith',
        child : [{
            name:'Michael (John\'s son)',
            child: [{
                name : 'Brad (Michael\'s son,John\'s grand son)'
            },{
                name : 'Brid (Michael\'s son,John\'s grand son)',
                child: [{
                    name:'Buddy (Brid\'s son,Michael\'s grand son)'
                }]
            },{
                name : 'Brud (Michael\'s son,John\'s grand son)'
            }]
        }]
    }];

    var myTpl = new Ext.XTemplate(
        '<tpl for=".">',
            '<div style="background-color: {color}; margin: 10px;">',
                '<b> Name :</b> {name}<br />',
                // how to make this repeat for every child, over and over, while it has one
                '<tpl if="typeof child !=\'undefined\'">',
                    '<b> Child : </b>',
                    '<tpl for="child">',
                        '{name} <br />',
                    '</tpl>',
                '</tpl>',
                ///////////////////////////////////////
            '</div>',
        '</tpl>'
    );
    myTpl.compile();
    myTpl.overwrite(document.body, tplData);

    Read the article

  • (ASP.NET) Problem with a repeater nested in a repeater, how to know when it is an ItemCommand?

    - by NoProblemBabe
    I have the following problem. Keeping in mind the following structure:

    <repeater>
        <updatepanel>
            <div>
                <link id="fatherLink" />
            </div>
            <div>
                <repeater>
                    <link id="childLink"/>
                </repeater>
            </div>
        </updatepanel>
    </repeater>

    right? I am using an UpdatePanel, so when I click the father link, a click method on the server side populates its child repeater. No problems there. But I need the childLink to perform an action on the server side as well, like taking some data into account and then sending it to a given page to do something else. When doing this I happened to notice that there are three situations:

    1 - First server call: it is not a postback; it populates the father repeater (no problems here).

    2 - Second server call: when the father link is clicked I populate the child repeater, with something like a "fatherLink_Click" function (no problems here).

    3 - Third server call: when the child is clicked, I can't seem to know that it is the child's item command, so I can't stop it from databinding all over again, which kills my ItemCommand event... (the problem).

    What can I do?
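
    A hedged sketch of the usual shape of a fix: give the inner repeater its own ItemCommand handler (nested repeater commands don't surface on the parent automatically), and guard the outer databinding so the child command's postback doesn't rebuild the repeaters and discard the event. The names are illustrative:

    // Markup, inside the parent repeater's ItemTemplate:
    // <asp:Repeater ID="childRepeater" runat="server"
    //               OnItemCommand="childRepeater_ItemCommand"> ... </asp:Repeater>
    // with the child link as:
    // <asp:LinkButton runat="server" CommandName="ChildAction"
    //                 CommandArgument='<%# Eval("Id") %>' Text="child" />

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            BindFatherRepeater(); // illustrative: rebind only on the first request
        }
    }

    protected void childRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
    {
        if (e.CommandName == "ChildAction")
        {
            string id = (string)e.CommandArgument;
            // ...take the data into account, then act on it / redirect...
        }
    }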

    Read the article

  • How would I merge nested dictionaries in a list in python?

    - by Kevin
    For example, if I had the result

    [{'Germany': {"Luge - Men's Singles": 'Gold'}}, {'Germany': {"Luge - Men's Singles": 'Silver'}}, {'Italy': {"Luge - Men's Singles": 'Bronze'}}]
    [{'Germany': {"Luge - Women's Singles": 'Gold'}}, {'Austria': {"Luge - Women's Singles": 'Silver'}}, {'Germany': {"Luge - Women's Singles": 'Bronze'}}]
    [{'Austria': {'Luge - Doubles': 'Gold'}}, {'Latvia': {'Luge - Doubles': 'Silver'}}, {'Germany': {'Luge - Doubles': 'Bronze'}}]

    how would I merge this so that all of the events Germany (and so on) had won could be under one single title? I.e. Germany would be Germany: Luge - Men's Singles: Gold, Silver; Luge - Women's Singles: Gold, Bronze; Luge - Doubles: Bronze. Thanks for any help.

    Read the article

  • Audio playback, creating nested loop for fade in/out.

    - by Dave Slevin
    Hi folks, first time poster here. A quick question about setting up a loop: I want to set up a for loop for the first 1/3 of the main loop that will increase a value from .00001 or similar to 1, so I can use it to multiply a sample variable and create a fade-in in this simple audio file playback routine. So far it's turning out to be a bit of a head scratcher; any help gratefully received.

    for (i = 0; i < end && !feof(fpin); i += blockframes)
    {
        samples = fread(audioblock, sizeof(short), blocksamples, fpin);
        frames = samples;
        for (j = 0; j < frames; j++)
        {
            for (f = 0; f < frames/3; f++)
            {
                fade = fade--;
            }
            output[j] = audioblock[j]/fade;
        }
        fwrite(output, sizeof(short), frames, fpoutput);
    }

    Apologies; so far I've read and re-written the file successfully. My problem is that I'm trying to figure out a way to loop the variable 'fade' so it either increases or decreases to 1, so as to modify the output variable. I wanted to do this in, say, 3 stages:

    1. From 0 to frames/3, increase a multiplication factor from .0001 to 1.
    2. From frames 1/3 to frames 2/3, do nothing (multiply by 1).
    3. Decrease the factor back below 1 so the output variable decreases back to the original point.

    How can I create a loop that will increase and decrease these values over the outside loop?

    Read the article
