Search Results

Search found 7897 results on 316 pages for 'partial views'.


  • Silverlight Cream Top Posted Authors July to December, 2010

    - by Dave Campbell
    It's past the first of January, and it's now time to recognize devs that have a large number of posts in Silverlight Cream. Ground rules: I pick what posts are on the blog; only posts that go in the database are included; the author has to appear in SC at least 4 of the 6 months considered; I averaged the monthly posts and am only showing authors with an average greater than 1. Here are the Top Posted Authors at Silverlight Cream for July 1, 2010 through December 31, 2010: It is my intention to post a new list sometime shortly after the 1st of every month to recognize the top posted in the previous 6 months, so next up is January 1! Some other metrics for Silverlight Cream: At the time of this posting there are 7304 articles aggregated and searchable by partial Author, partial Title, keywords (in the synopsis), or partial URL. There are also 118 tags by which the articles can be searched. This is an increase of 317 posts over last month. At the time of this posting there are 783 articles tagged wp7dev. This is an increase of 119 posts over last month, or over a third of the posts added. Stay in the 'Light!

    Read the article

  • Silverlight Cream Top Posted Authors August, 2010 to January, 2011

    - by Dave Campbell
    It's *way* past the first of February, and it's now time to recognize devs that have a large number of posts in Silverlight Cream. Ground rules: I pick what posts are on the blog; only posts that go in the database are included; the author has to appear in SC at least 4 of the 6 months considered; I averaged the monthly posts and am only showing authors with an average greater than 1. Here are the Top Posted Authors at Silverlight Cream for August 1, 2010 through January 31, 2011: It is my intention to post a new list sometime shortly after the 1st of every month to recognize the top posted in the previous 6 months, so next up is March 1! Some other metrics for Silverlight Cream: At the time of this posting there are 7304 articles aggregated and searchable by partial Author, partial Title, keywords (in the synopsis), or partial URL. There are also 118 tags by which the articles can be searched. This is an increase of 265 posts over last month. At the time of this posting there are 783 articles tagged wp7dev. This is an increase of 155 posts over last month, or over half of the posts added. Stay in the 'Light!

    Read the article

  • Silverlight Cream Top Posted Authors September, 2010 to February, 2011

    - by Dave Campbell
    It's a bit past the first of March, and it's now time to recognize devs that have a large number of posts in Silverlight Cream. Ground rules: I pick what posts are on the blog; only posts that go in the database are included; the author has to appear in SC at least 4 of the 6 months considered; I averaged the monthly posts and am only showing authors with an average greater than 1. Here are the Top Posted Authors at Silverlight Cream for September 1, 2010 through February 28, 2011: It is my intention to post a new list sometime shortly after the 1st of every month to recognize the top posted in the previous 6 months, so next up is March 1! Some other metrics for Silverlight Cream: At the time of this posting there are 7672 articles aggregated and searchable by partial Author, partial Title, keywords (in the synopsis), or partial URL. There are also 118 tags by which the articles can be searched. This is an increase of 368 posts over last month. At the time of this posting there are 984 articles tagged wp7dev. This is an increase of 201 posts over last month, or 54% of the posts added. Stay in the 'Light!

    Read the article

  • After returning PartialView(), Url.Action("Action", "Controller") loses the controller

    - by Johannes
    The question relates to a problem I posted before (http://stackoverflow.com/questions/2403899/asp-net-mvc-partial-view-does-not-call-my-action). In practice I have a partial view which contains a form; after the form is submitted, the controller returns the partial view. The problem: if I reload the page which contains the partial view, the function <%= Url.Action("ChangePassword", "Account") %> returns "Account/ChangePassword", but if I submit the form and the partial is returned by the controller using return PartialView(), the same function returns only "ChangePassword". Any idea why? The view looks like: <form action="<%= Url.Action("ChangePassword", "Account") %>" method="post" id="jform"> <div> <fieldset> <legend>Account Information</legend> <p> <label for="currentPassword">Current password:</label> <%= Html.Password("currentPassword") %> <%= Html.ValidationMessage("currentPassword") %> </p> <p> <label for="newPassword">New password:</label> <%= Html.Password("newPassword") %> <%= Html.ValidationMessage("newPassword") %> </p> <p> <label for="confirmPassword">Confirm new password:</label> <%= Html.Password("confirmPassword") %> <%= Html.ValidationMessage("confirmPassword") %> </p> <p> <input type="submit" value="Change Password" /> </p> </fieldset> </div> </form> </div> <script> $(function() { $('#jform').submit(function() { $('#jform').ajaxSubmit({ target: '#FmChangePassword' }); return false; }); }); </script> Part of the controller: if (!ValidateChangePassword(currentPassword, newPassword, confirmPassword)) { return PartialView(ViewData); }
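
    For what it's worth, a minimal sketch of the shape such an action usually takes is below. The explicit view names and the success branch are assumptions, not taken from the post; the idea is simply to return the partial by name from an explicit POST action on AccountController rather than relying on the default PartialView() resolution.

        // Hedged sketch only - the view names below are illustrative.
        [HttpPost]
        public ActionResult ChangePassword(string currentPassword, string newPassword, string confirmPassword)
        {
            if (!ValidateChangePassword(currentPassword, newPassword, confirmPassword))
            {
                // Name the partial explicitly; ModelState already carries the validation messages.
                return PartialView("ChangePassword");
            }

            // Success path is a placeholder - the original post does not show one.
            return PartialView("ChangePasswordSuccess");
        }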

    Read the article

  • ASP.NET MVC, Webform hybrid

    - by Greg Ogle
    We (me and my team) have a ASP.NET MVC application and we are integrating a page or two that are Web Forms. We are trying to reuse the Master Page from our MVC part of the app in the WebForms part. We have found a way of rendering an MVC partial view in web forms, which works great, until we try and do a postback, which is the reason for using a WebForm. The Error: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. The Code to render the partial view from a WebForm (credited to "How to include a partial view inside a webform"): public static class WebFormMVCUtil { public static void RenderPartial(string partialName, object model) { //get a wrapper for the legacy WebForm context var httpCtx = new HttpContextWrapper(System.Web.HttpContext.Current); //create a mock route that points to the empty controller var rt = new RouteData(); rt.Values.Add("controller", "WebFormController"); //create a controller context for the route and http context var ctx = new ControllerContext( new RequestContext(httpCtx, rt), new WebFormController()); //find the partial view using the viewengine var view = ViewEngines.Engines.FindPartialView(ctx, partialName).View; //create a view context and assign the model var vctx = new ViewContext(ctx, view, new ViewDataDictionary { Model = model }, new TempDataDictionary()); //ERROR OCCURS ON THIS LINE view.Render(vctx, System.Web.HttpContext.Current.Response.Output); } } My only experience with this error is in context of a web farm, which is not the case. Also, I understand that the machine key is used for decrypting the ViewState. Any information on how to diagnose this issue would be appreciated. A Work-around: So far the work-around is to move the header content to a PartialView, then use an AJAX call to call a page with just the Partial View from the WebForms, and then using the PartialView directly on the MVC Views. Also, we are still able to share non-tech-specific parts of the Master Page, i.e. anything that is not MVC specific. Still yet, this is not an ideal solution, a server-side solution is still desired. Also, this solutino has issues when working with controls that have more sophisticated controls, using JavaScript, particularly dynamically generated script as used by 3rd party controls.
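
    As a side note, the helper above builds a route pointing at a WebFormController that the post never shows; a minimal sketch of that stub (purely an assumption about what it looks like) is below. Separately, the error text itself points at configuration: explicitly specifying a machineKey (validationKey/decryptionKey) in web.config, so the page validates against fixed keys rather than auto-generated ones, is the usual first thing to check for "Validation of viewstate MAC failed".

        using System.Web.Mvc;

        // Assumed stub (not shown in the original post): an otherwise empty controller
        // whose only job is to give the mock RouteData and ControllerContext something to resolve.
        public class WebFormController : Controller
        {
        }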

    Read the article

  • Why does OnFailure always fire when returning View() to an Ajax form?

    - by Wahid Bitar
    I'm trying to make a log-in log-off with Ajax supported. I made some logic in my controller to sign the user in and then return simple partial containing welcome message and log-Off ActionLink my Action method looks like this : public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (MembershipService.ValidateUser(model.UserName, model.Password)) { FormsService.SignIn(model.UserName, model.RememberMe); if (Request.IsAjaxRequest()) { //HERE IS THE PROBLEM :( return View("LogedInForm"); } else { if (!String.IsNullOrEmpty(returnUrl)) return Redirect(returnUrl); else return RedirectToAction("Index", "Home"); } } else { ModelState.AddModelError("", "The user name or password provided is incorrect."); if (Request.IsAjaxRequest()) { return Content("There were an error !"); } } } return View(model); } and I'm trying to return this simple partial : Welcome <b><%= Html.Encode(Model.UserName)%></b>! <%= Html.ActionLink("Log Off", "LogOff", "Account") %> and of-course the two partial are strongly-typed to LogOnModel. But if i returned View("PartialName") i always get OnFailure with status code 500. While if i returned Content("My Message") everything is going right. so please tell me why i always get this "StatusCode = 500" ??. where is the big mistake ??. By the way in my Site MasterPage i rendered partial to show long-on simple form this partial looks like this : <script type="text/javascript"> function ShowErrorMessage(ajaxContext) { var response = ajaxContext.get_response(); var statusCode = response.get_statusCode(); alert("Sorry, the request failed with status code " + statusCode); } function ShowSuccessMessage() { alert("Hey everything is OK!"); } </script> <div id="logedInDiv"> </div> <% using (Ajax.BeginForm("LogOn", "Account", new AjaxOptions { UpdateTargetId = "logedInDiv", InsertionMode = InsertionMode.Replace, OnSuccess = "ShowSuccessMessage", OnFailure = "ShowErrorMessage" })) { %> <%= Html.TextBoxFor(m => m.UserName)%> <%= Html.PasswordFor(m => m.Password)%> <%= Html.CheckBoxFor(m => m.RememberMe)%> <input type="submit" value="Log On" /> < <% } %>
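
    A minimal sketch of the change usually suggested for this symptom is below, assuming LogedInForm is the strongly-typed partial described in the post; everything else is illustrative.

        // The partial is strongly typed to LogOnModel, but View("LogedInForm") is called
        // without a model, so Model is null when the partial evaluates Html.Encode(Model.UserName) -
        // a likely source of the 500 that OnFailure reports. Returning it as a partial *with*
        // the model sidesteps that and keeps the response to just the fragment the
        // UpdateTargetId div expects.
        if (Request.IsAjaxRequest())
        {
            return PartialView("LogedInForm", model);
        }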

    Read the article

  • Cast IEnumerable<Inherited> To IEnumerable<Base>

    - by david2342
    I'm trying to cast an IEnumerable of an inherited type to an IEnumerable of the base class. I have tried the following: var test = resultFromDb.Cast<BookedResource>(); return test.ToList(); But I get the error: You cannot convert these types. Linq to Entities only supports conversion primitive EDM-types. The classes involved look like this: public partial class HistoryBookedResource : BookedResource { } public partial class HistoryBookedResource { public int ResourceId { get; set; } public string DateFrom { get; set; } public string TimeFrom { get; set; } public string TimeTo { get; set; } } public partial class BookedResource { public int ResourceId { get; set; } public string DateFrom { get; set; } public string TimeFrom { get; set; } public string TimeTo { get; set; } } [MetadataType(typeof(BookedResourceMetaData))] public partial class BookedResource { } public class BookedResourceMetaData { [Required(ErrorMessage = "Resource id is Required")] [Range(0, int.MaxValue, ErrorMessage = "Resource id is must be an number")] public object ResourceId { get; set; } [Required(ErrorMessage = "Date is Required")] public object DateFrom { get; set; } [Required(ErrorMessage = "Time From is Required")] public object TimeFrom { get; set; } [Required(ErrorMessage = "Time to is Required")] public object TimeTo { get; set; } } The problem I'm trying to solve is to get records from the HistoryBookedResource table and have the result in an IEnumerable<BookedResource>, using Entity Framework and LINQ. UPDATE: When using the following, the cast seems to work, but when trying to loop with a foreach the data is lost. resultFromDb.ToList() as IEnumerable<BookedResource>; UPDATE 2: I'm using Entity Framework's generated model; the model (edmx) is created from the database and includes classes that represent the database tables. In the database I have a history table for old BookedResource records, and it can happen that the user wants to look at these; to get the old data from the database, Entity Framework uses classes with the same name as the tables to receive data from the db. So I receive the data from the HistoryBookedResource table in the HistoryBookedResource class. Because Entity Framework generates the partial classes with the properties, I don't know if I can make them virtual and override them. Any suggestions will be greatly appreciated.
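
    A minimal sketch of the usual workaround for this error is below, assuming resultFromDb is an Entity Framework query over HistoryBookedResource: switch to LINQ to Objects before casting, so the upcast never reaches the LINQ to Entities provider. (Also worth noting: because the HistoryBookedResource partial re-declares the same properties as BookedResource, it hides the inherited members, which would explain why the data appears lost when the objects are read through a base-typed reference.)

        using System.Collections.Generic;
        using System.Linq;

        // Sketch: materialize the EF results first, then upcast in memory.
        IEnumerable<BookedResource> booked = resultFromDb
            .AsEnumerable()            // leave the LINQ to Entities expression tree
            .Cast<BookedResource>()    // ordinary CLR upcast, valid because of the inheritance
            .ToList();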

    Read the article

  • Unusual RJS error

    - by rrb
    Hi, I am getting the following error in my RoR application: RJS error: TypeError: element is null Element.update("notice", "Comment Posted"); Element.update("allcomments", "\n\n\n \n\n waht now?\n\n \n\n \n\n \n\n asdfasdfa\n \n\n \n\n asdfasdf\n \n\n\n\n\n"); But when I hit the refresh button, I can see my partial updated. Here's my code: show_comments View: <table> <% comments.each do |my_comment| %> <tr> <td><%=h my_comment.comment%></td> </tr> <% end %> </table> show View: <div class="wrapper"> <div class="rescale"> <div class="img-main"> <%= image_tag @deal.photo.url %> </div> </div> <div class="description"> <p class ="description_content"> <%=h @deal.description %> </p> </div> </div> <p> <b>Category:</b> <%=h @deal.category %> </p> <p> <b>Base price:</b> <%=h @deal.base_price %> </p> <%#*<p>%> <%#*<b>Discount:</b>%> <%#=h @deal.discount %> <%#*</p>%> <%= link_to 'Edit', edit_deal_path(@deal) %> | <%= link_to 'Back', deals_path %> <p> <%= render :partial=>'deal_comments', :locals=>{ :comments=>Comment.new(:deal_id=>@deal.id)} %> </p> <div id="allcomments"> <%= render :partial=>'show_comments', :locals=>{ :comments=>Comment.find(@deal.comments)} %> </div> Controller: def create @comment = Comment.new(params[:comment]) render :update do |page| if @comment.save page.replace_html 'notice', 'Comment Posted' else page.replace_html 'notice', 'Something went wrong' end page.replace_html 'allcomments', :partial=> 'deals/show_comments', :locals=>{:comments=> @comment.deal.comments} end end def show_comments @deal = Deal.find(params[:deal_id]) render :partial=> "deals/show_comments", :locals=>{:comments=>@deal.comments} end end

    Read the article

  • Ajax page.replace_html problems with partials in Rails

    - by Chris Power
    Hello, I am having a problem with a pretty simple AJAX call in rails. I have a blog-style application and each post has a "like" feature. I want to be able to increment the "like" on each post in the index using AJAX onclick. I got it to work; however, the DOM is a bit tricky here, because no matter what partial its looking at, it will only update the TOP partial. so if I click "like" on post #2, it will update and replace the "likes" on post #1 instead. Code for _post partial: <some code here...> <div id="postcontent"> Posted <%= post.created_at.strftime("%A, %b %d")%> <br /> </div> <div id="postlikes"> <%= link_to_remote 'Like', :url => {:controller => 'posts', :action => 'like_post', :id => post.id}%> <%= post.like %> </div> code for _postlikes partial: <div id="postlikes"> <%= link_to_remote 'Like', :url => {:controller => 'posts', :action => 'like_post', :id => @post.id}%> <%= @post.like %> </div> </div> like_post.rjs code: page.replace_html "postlikes", :partial => "postlikes", :object => @post page.visual_effect :highlight, "postlikes", :duration => 3 So this all works properly for the first "postcontent" div. But this is an index of posts, so if I wanted to updated the second "postcontent" div on the page, it will still replace the html of the first. I understand the problem, I just don't know how to fix it :) Thanks in advance!

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Making Money from your SQL Server Blog

    - by Bill Graziano
    My SQL Server blog reading list is around one hundred blogs.  Many people are writing great content and generating lots of page views.  I see some of them running Google AdSense and trying to make a little money off their traffic.  If you want to earn some some extra money from what you’ve written there are a couple of options.  And one new option that I’m announcing here. Background Internet advertising is sold based on a few different pricing schemes.  Flat Fee.  You offer either all your impressions (page views) or some percentage of your impressions in exchange for a flat monthly fee.  CPM or cost per thousand impressions.  If the quoted price is $2 CPM you’ll get $2 for every 1,000 times the ad is displayed.  While you might think the “M” means millions, the “M” in CPM is the roman numeral for 1,000. CPC or cost per click.  This is also called PPC or pay per click.  In this method you get paid based on how many clicks there are on the ad.  CPA or cost per action.  In this method you get paid based on an action that occurs on the advertisers site after they click on the ad.  This is typically some type of sign up form.  This is how most affiliate programs work. Darren Rowse at ProBlogger has been writing about blogging and making money off blogs for years.  He has a good introduction to making money on your blog in his “Making Money” section.  If you’re interested in learning more he has a post up titled How to Make More Money From Your Blog in the New Year that links to many of his best posts on the subject. Google AdSense This is the most common method for people earning money from their blogging.  It’s easy to setup and administer.  You tell AdSense what size ads you’d like to run and it gives you a little piece of JavaScript to put on your site.  AdSense quickly learns the topics you write about and displays ads that are appropriate for your site.  I typically see ads for hosting, SQL Server tools and developer tools running in AdSense slots.  AdSense pays on a CPC model.  If you translate that back to CPM pricing you’ll see rates from $0.50 to $1.00 CPM. Amazon While you might not make much money writing books it’s now possible to make even less helping Amazon sell them.  You can sign up for an Amazon affiliate program.  Each time you send Amazon a link and someone buys the book you get a cut of that sale.  This is the CPA model from above.  Amazon can help you build some pretty nice “stores”.  Here’s the SQL Server bookstore I built for SQLTeam.com.  If you’re just putting in a page with books like I’ve done on SQLTeam you should keep your expectations low.  If you’re writing book reviews of suggesting books on your blog it really does make sense to setup an Amazon affiliate link.  People are much more likely to buy a book based on a review from a trusted source.  I always try to buy through a referral link if there is one. Amazon pays about 4% of the price as a referral fee.  You also get credit for anything else they buy while on the site.  I recently had someone buy an iPod nano with their SQL Server book making me an extra $5.60 richer!  Estimating how much you can make is difficult though.  How much attention you draw to the links and book reviews can dramatically affect the earnings. Private Ad Sales This is the hardest but potentially most lucrative option.  You sell advertising directly to companies that want to sell things to your readers.  
Typically this would be SQL Server tool vendors, hosting companies or anyone else that wants to make money off database administrators.  This is also the most difficult to do.  You’ll need the contacts at the companies and enough page views to make it worth their while.  You’ll also need software to track the page views and clicks, geo-target your ads and smooth out the impressions.  Your earnings are based on whatever you can negotiate with the companies. SQL Server Ad Network For the last couple of years I’ve run any extra ads that I sold on the SQLTeam Weblogs.  You can see an example of that on Mladen’s blog.  The ad in the upper right corner is one that I’m running for him.  (Note: Many of the ads I’m running are geo-targeted to only appear in English speaking countries.  You may see a different set of ads outside the US, Canada and the UK.  You can also see he has a couple of Google ads on his blog.)  When I run ads on his blog I split the advertising revenue with him.  They make a little and I make a little. I recently started to expand this and sell advertising specifically to run on SQL Server-related blogs.  I’m also starting to run ads on non-SQLTeam blogs.  The only way I can sell more advertising is to have more blogs to run it on.  And that’s where you come in. I’ve created a SQL Server advertising network.  I handle all the ad sales and provide the technology to serve the ads.  I handle collections and payments back to you.  You get paid at the end of each month regardless of when (or if) the advertiser actually pays.  All you need to do is add a small piece of JavaScript to your site to display the ads. If you’re writing about SQL Server and interested in earning a little money for your site I’d like to talk to you.  You can use the Contact Us page on SQLTeam.com to reach me.  Running advertising on your blog isn’t for everyone.  If you’re concerned about what advertisers might think about certain posts then you might not be a good fit.  For the most part this isn’t an issue.  You’ll also need to have a PayPal account to receive payments.  You probably won’t get rich doing this.  But you can earn extra cash on the side for doing what you would do anyway.  I do know that people have earned enough to buy themselves a nice laptop doing this. My initial target is blogs with more than 10,000 page views per month.  I expect to pay two to three times what Google pays.  If you have less than 10,000 page views per month but are still interested I’d still like to hear from you.  I may not be able to sign up smaller blogs right away but we’ll get the process started.  If you’re unsure about your traffic Google Analytics is a free tool that provides great reporting on traffic, popular posts and how people find your blog.  If you have any questions or are just curious drop me a line and I’ll try to answer your questions.

    Read the article

  • Routing to a Controller with no View in Angular

    - by Rick Strahl
    Angular provides a nice routing and controller-to-view model that makes it easy to create sophisticated JavaScript views. But Angular's views are destroyed and re-rendered each time they are activated - what if you need to work with a persisted view that's too expensive to re-render? Here's how to build a headless controller that doesn't render a view through Angular, but rather manages the view or markup manually.

    Read the article

  • Display a JSON-string as a table

    - by Martin Aleksander
    I'm totally new to JSON and have a JSON string I need to display as a user-friendly table. I have this file, http://ish.tek.no/json_top_content.php?project_id=11&period=week, which shows ID numbers for products (title) and the number of views. The title ID should be connected to this file, http://api.prisguide.no/export/product.php?id=158200, so I can get a table like this:
    ID     | Product Name        | Views
    158200 | Samsung Galaxy SIII | 21049
    How can I do this?

    Read the article

  • Android Photosphere Viewer Application

    - by Frisbetarian
    I'm aiming to develop an Android app that simply views photosphere files. It should use the compass to pan around the sphere when the user moves his/her phone. I'm not aiming to create photospheres via the phone's camera with it, just to upload the necessary JPEGs and view them. Could anyone offer any advice on how I could go about implementing an API that views, and gives the option to manipulate, photosphere files?

    Read the article

  • Need help identifying the key elements to focus on in the code review phase?

    - by Sankar Ganesh
    Hi friends, let's share our views on the code review process. If someone gave you a code snippet and asked you to review it, what are the major things you would focus on during the review? For instance, I would check whether any dead code is present. Other than checking for dead code, what are the key elements to focus on in a code review? Thanks for sharing your views. Sankar Ganesh.S

    Read the article

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL Scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice; but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL Scripts to rebuild all the database objects. Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result! Starting with a clean, empty database and then building all your database objects using SQL Scripts using the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo; you have a data mart "to go". My universal "drop all" SQL Script To ensure you start with a clean database run my universal "drop all" script which you can download from here: 100_drop_all.zip By using the database catalog views, the script finds and drops all of the following database objects: Foreign key relationships Stored procedures Triggers Database triggers Views Tables Functions Partition schemes Partition functions XML Schema Collections Schemas Types Service broker services Service broker queues Service broker contracts Service broker message types SQLCLR assemblies There are two optional sections to the script: drop users and drop roles. You may use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file. Good luck and be careful!!

    Read the article

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing an client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language and framework agnostic. This is the first time I'm doing a real MVC application and I'm having trouble deciding exactly what means each component. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such) I have a server.js that ties the whole application together. It does initialization of all other components (including the database connection, and what I think are the "models" and the "views"), receiving HTTP requests and deciding which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file: var connection = mysql.createConnection({ host : 'localhost', user : 'root', password : 'sqlrevenge', database : 'blog' }); //... app.get("/login", function (req, res) { //Function handles a GET request for login forms if (process.env.NODE_ENV == 'DEVELOPMENT') { mu.clearCache(); } session.session_from_request(connection, req, function (err, session) { if (err) { console.log('index.js session error', err); session = null; } login_view.html(res, user_model, post_model, session, mu); //I named my view functions "html" for the case I might want to add other output types (such as a JSON API), or should I opt for completely separate views then? }); }); I have another file that belongs named session.js. It receives a cookies object, reads the stored data to decide if it's a valid user session or not. It also includes a function named login that does change the value of cookies. First, I thought it would be part of the controller, since it kind of dealt with user input and supplied data to the models. Then, I thought that maybe it was a model since it dealt with the application data/database and the data it supplies is used by views. Now, I'm even wondering if it could be considered a View, since it outputs data (cookies are part of HTTP headers, which are output)

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100Mbs/day) and a bunch of console TCL/TK tools to view it - I want to turn it into a web app that I can build, and others can maintain. In long: my group runs simulations yielding 100s of Mbs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old school 1990's style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories etc. I want to replace (or at least supplement) the xwindows / console style UI with a web-based one, so I need the following properties: pleasant to program; can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything); as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it; I can attach a web-ui to navigate between views - each view is likely to contain keys which might make sense to view in another. I am new to building systems that have logic on the back-end and front-end of a web-server. From that point of view, they do this: the backend wraps old-school executables, constructs calls into them and then takes the output and wraps it up, niceifies it and delivers it to the web client. For instance the tool might generate a number of indexed images (per invocation) which I might deliver all at once or on-demand. It may (probably) need to do heavy stats on some sources. The frontend provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. It will probably have some views with a lot of interactivity. Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "beans!" "Scala!" but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). I take the policy that I use the best/closest matched language for a project, but most of my team are extremely low level (ie pipeline stages and CDyn) so I don't have the peer group to know where to start.
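
    Purely as an illustration of the "wrap existing command-line tools" requirement - the tool path and arguments are hypothetical, and nothing here presumes any of the stacks mentioned - the back-end step is essentially launching the process and capturing its output for whatever view layer ends up rendering it:

        using System.Diagnostics;

        static class ToolWrapper
        {
            // Hypothetical example: run a legacy console tool and capture its text output
            // so a web handler can template and deliver it.
            public static string Run(string toolPath, string arguments)
            {
                var startInfo = new ProcessStartInfo(toolPath, arguments)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false,
                    CreateNoWindow = true
                };

                using (var process = Process.Start(startInfo))
                {
                    // Read stdout before waiting so a full pipe cannot deadlock the tool.
                    string output = process.StandardOutput.ReadToEnd();
                    process.WaitForExit();
                    return output;
                }
            }
        }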

    Read the article

  • Who should control navigation in an MVVM application?

    - by SonOfPirate
    Example #1: I have a view displayed in my MVVM application (let's use Silverlight for the purposes of the discussion) and I click on a button that should take me to a new page. Example #2: That same view has another button that, when clicked, should open up a details view in a child window (dialog). We know that there will be Command objects exposed by our ViewModel bound to the buttons with methods that respond to the user's click. But, what then? How do we complete the action? Even if we use a so-called NavigationService, what are we telling it? To be more specific, in a traditional View-first model (like URL-based navigation schemes such as on the web or the SL built-in navigation framework) the Command objects would have to know what View to display next. That seems to cross the line when it comes to the separation of concerns promoted by the pattern. On the other hand, if the button wasn't wired to a Command object and behaved like a hyperlink, the navigation rules could be defined in the markup. But do we want the Views to control application flow and isn't navigation just another type of business logic? (I can say yes in some cases and no in others.) To me, the utopian implementation of the MVVM pattern (and I've heard others profess this) would be to have the ViewModel wired in such a way that the application can run headless (i.e. no Views). This provides the most surface area for code-based testing and makes the Views a true skin on the application. And my ViewModel shouldn't care if it displayed in the main window, a floating panel or a child window, should it? According to this apprach, it is up to some other mechanism at runtime to 'bind' what View should be displayed for each ViewModel. But what if we want to share a View with multiple ViewModels or vice versa? So given the need to manage the View-ViewModel relationship so we know what to display when along with the need to navigate between views, including displaying child windows / dialogs, how do we truly accomplish this in the MVVM pattern?
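
    One commonly cited middle ground - offered here only as a sketch, not as the answer to the question - is to have commands talk to a navigation abstraction while the view layer owns the mapping from ViewModel to View. All names below are illustrative:

        using System;

        // Illustrative only: the ViewModel depends on this abstraction so it can run
        // "headless" in tests; a view-layer implementation decides which View (page,
        // child window, dialog) is actually shown for a given key and parameter.
        public interface INavigationService
        {
            void NavigateTo(string viewKey, object parameter);
            void ShowDialog(string viewKey, object parameter);
        }

        public class OrderListViewModel
        {
            private readonly INavigationService _navigation;

            public OrderListViewModel(INavigationService navigation)
            {
                if (navigation == null) throw new ArgumentNullException("navigation");
                _navigation = navigation;
            }

            // Invoked by the bound Command: the ViewModel states intent ("show details
            // for this order") without knowing which View or window hosts the result.
            public void ShowDetails(int orderId)
            {
                _navigation.ShowDialog("OrderDetails", orderId);
            }
        }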

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to re-design the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: let us assume there is an abstract class called person which has certain properties defining a generic person, such as name, age, date of birth, etc. There are two classes, student and teacher, that extend this abstract class and add their own unique properties to it. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of the question).

    Now I need to create views/reports/data grids which contain details from multiple classes - for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, each using data from 3-4 tables and taking multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper, so here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to the Student class, the Teacher class or any other class currently present.

    a) Should I create a new class, say a dataView class, design it along MVC lines and keep the query there? What about the other views - how do they fit in this architecture?
    b) Should I not keep the query in code at all, and make it a DB view?
    c) Am I completely wrong in the approach? If so, what is the right approach?

    My considerations are as follows:
    a) it should be easy to add new views later if a requirement comes in, without having to copy-paste-modify code
    b) I would like to make it as loosely coupled as possible, so that minor DB structure changes do not break it

    I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords).

    Edit1: Modified the title to make it more relevant.
    Edit2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find the question and solution on Stack Overflow that gave me the detailed answer to clear up the confusion.
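    One pattern that fits considerations (a) and (b) is a dedicated query (read-model) class per report: the student and teacher classes keep their operational logic, and each report owns exactly one JOIN-heavy query. The sketch below uses C# only to illustrate the shape - the idea translates directly to PHP - and every class, table and column name is invented for illustration.

        using System.Collections.Generic;

        // Flat row returned by the report; it is not a domain entity.
        public class StudentProjectRow
        {
            public string StudentName { get; set; }
            public string ProjectTitle { get; set; }
        }

        // Assumed thin abstraction over the database driver (PDO, mysqli, ADO.NET, ...).
        public interface IDbRowReader
        {
            IList<T> Query<T>(string sql, IDictionary<string, object> parameters) where T : new();
        }

        // One report = one class = one responsibility; schema tweaks touch only this class.
        public class StudentsByMentorQuery
        {
            private readonly IDbRowReader _db;

            public StudentsByMentorQuery(IDbRowReader db) { _db = db; }

            public IList<StudentProjectRow> Execute(string teacherName, string subject)
            {
                const string sql =
                    "SELECT s.name AS StudentName, p.title AS ProjectTitle " +
                    "FROM students s " +
                    "JOIN projects p ON p.student_id = s.id " +
                    "JOIN teachers t ON t.id = p.mentor_id " +
                    "WHERE t.name = @teacher AND p.subject = @subject";

                return _db.Query<StudentProjectRow>(sql, new Dictionary<string, object>
                {
                    { "@teacher", teacherName },
                    { "@subject", subject }
                });
            }
        }

    Each view or page then instantiates just the query class it needs, so new reports are additive (a new class, no copy-paste), and moving a given query into a DB view later only changes the inside of its Execute method.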

    Read the article

  • SQL Server 2008 System Functions to Monitor the Instance, Database, Files, etc.

    SQL Server provides several system meta data functions which allow users to obtain property values of different SQL Server objects and securables. Although you can also use the SQL Server catalog views or Dynamic Management Views to obtain much of this information, in some circumstances the system meta data functions simplify the process. In this tip I am going to demonstrate some of the available system meta data functions and their usage in different scenarios.

    Read the article

  • SQL View: Beyond the Basics

    Joe Celko delves into the main uses of views, explains how the WITH CHECK OPTION works, and demonstrates how the INSTEAD OF trigger can be used in those cases where views cannot be updatable.

    Read the article

  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class. I elected in this template only to output appSettings and connectionStrings, but you can see it would be easily adapted for app-specific settings, bindings, etc. This allows for quick access to config values as well as removing the potential for typos when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config. However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there). Additionally, there are other options as noted here.

    The first step was to create the .tt file. Note that this is a basic example; it could be extended even further, I'm sure. In this example I just manually input the path to the web.config file.

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ output extension=".cs" #>
        <#@ assembly name="System.Configuration" #>
        <#@ assembly name="System.Xml" #>
        <#@ assembly name="System.Xml.Linq" #>
        <#@ assembly name="System.Net" #>
        <#@ assembly name="System" #>
        <#@ import namespace="System.Configuration" #>
        <#@ import namespace="System.Xml" #>
        <#@ import namespace="System.Net" #>
        <#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
        <#@ import namespace="System.Xml.Linq" #>
        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
        <#
            // Load the config file and emit one property per appSetting.
            var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config");
            var results = xDocument.Descendants("appSettings");
            const string key = "key";
            const string name = "name";
            foreach (var xElement in results.Descendants())
            {
        #>
                public string <#= xElement.Attribute(key).Value #> { get { return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}", "\"", xElement.Attribute(key).Value, "\"") #>]; } }
        <#
            }

            // Emit one property per connection string.
            var connectionStrings = xDocument.Descendants("connectionStrings");
            foreach (var connString in connectionStrings.Descendants())
            {
        #>
                public string <#= connString.Attribute(name).Value #> { get { return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}", "\"", connString.Attribute(name).Value, "\"") #>].ConnectionString; } }
        <#
            }
        #>
            }
        }

    The resulting .cs file:

        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                public string ClientValidationEnabled { get { return ConfigurationManager.AppSettings["ClientValidationEnabled"]; } }
                public string UnobtrusiveJavaScriptEnabled { get { return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"]; } }
                public string ServiceUri { get { return ConfigurationManager.AppSettings["ServiceUri"]; } }
                public string TestConnection { get { return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString; } }
                public string SecondTestConnection { get { return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString; } }
            }
        }

    Next, I extended the partial class for easy access to the Configuration. However, you could just use the generated class file itself.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                private static readonly Configurator Instance = new Configurator();

                public static Configurator For
                {
                    get { return Instance; }
                }
            }
        }

    Finally, in my example, I used the Configurator class like so:

        [TestMethod]
        public void Test_Web_Config()
        {
            var result = Configurator.For.ServiceUri;
            Assert.AreEqual(result, "http://localhost:30237/Service1/");
        }

    Read the article

  • Confused as to which Prototype helper to use

    - by user284194
    After reading http://api.rubyonrails.org/classes/ActionView/Helpers/PrototypeHelper.html I just can't seem to find what I'm looking for. I have a simplistic model that deletes the oldest message after the list of messages reaches 24; the model is this simple:

        class Message < ActiveRecord::Base
          after_create :destroy_old_messages

          protected

          def destroy_old_messages
            messages = Message.all(:order => 'updated_at DESC')
            messages[24..-1].each { |p| p.destroy } if messages.size >= 24
          end
        end

    There is a message form below the list of messages which is used to add new messages. I'm using Prototype/RJS to add new messages to the top of the list. create.rjs:

        page.insert_html :top, :messages, :partial => @message
        page[@message].visual_effect :grow
        #page[dom_id(@messages)].replace :partial => @message
        page[:message_form].reset

    My index.html.erb is very simple:

        <div id="messages">
          <%= render :partial => @messages %>
        </div>
        <%= render :partial => "message_form" %>

    When new messages are added they appear just fine, but when the 24-message limit has been reached it just keeps adding messages and doesn't remove the old ones. Ideally I'd like them to fade out as the new ones are added, but they can just disappear. The commented line in create.rjs actually works: it removes the expired message, but I lose the visual effect when adding a new message. Does anyone have a suggestion on how to accomplish adding and removing messages from this simple list with effects for both? Help would be greatly appreciated. Thanks for reading. P.S.: would periodically_call_remote be helpful in this situation?

    Read the article

  • ASP.NET MVC - PartialView not refreshing

    - by Billy Logan
    Hello everyone, I have a view that uses a JavaScript callback to reload a partial view. For whatever reason the contents of the partial view do not refresh, even though I can step through the entire process and see the page being recalled and populated. Any reason why the page would not display? The code is as follows:

        <div id="big_image_content">
            <% Html.RenderPartial("ZoomImage", Model); %>
        </div>

    This link should reload the div above:

        <a href="javascript:void(0)"
           onclick="$('#big_image_content').load('/ShopDetai/ZoomImage');"
           title="<%= shape.Shape %>" alt="<%= shape.Shape %>">
            <img src="http://images.rugs-direct.com/<%= shape.Image.ToLower() %>" width="40" alt="<%= shape.Shape %>">
        </a>

    The partial view (ZoomImage.ascx), simplified for now, but it still doesn't load:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<RugsDirect.Data.ItemDetailsModel>" %>
        <%= Model.Category.ToLower() %>

    And finally the controller side of things:

        public ActionResult ZoomImage()
        {
            try
            {
                ItemDetailsModel model = GetMainImageContentModel();
                return PartialView("ZoomImage", model);
            }
            catch (Exception ex)
            {
                // send the error email
                ExceptionPolicy.HandleException(ex, "Exception Policy");

                // redirect to the error page
                return RedirectToAction("ViewError", "Shop");
            }
        }

    Again, I can step through this entire process and all seems to be working except for the page not reloading. I can even break on the <%= Model.Category.ToLower() %> line of the partial view, but it will not be displayed. Thanks in advance, Billy

    Read the article

< Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >