Search Results

Search found 2947 results on 118 pages for 'partial specialization'.

Page 28/118

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class.  I elected in this template only to output appSettings and connectionStrings but you can see it would be easily adapted for app specific settings, bindings etc.  This allows for quick access to config values as well as removing the potential for typo's when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config.  However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there).  Additionally, there are other options as noted here. The first step was to create the .tt file.  Note that this is a basic example, it could be extended even further I'm sure.  In this example I just manually input the path to the web.config file. <#@ template debug="false" hostspecific="true" language="C#" #><#@ output extension=".cs" #><#@ assembly Name="System.Configuration" #><#@ assembly name="System.Xml" #><#@ assembly name="System.Xml.Linq" #><#@ assembly name="System.Net" #><#@ assembly name="System" #><#@ import namespace="System.Configuration" #><#@ import namespace="System.Xml" #><#@ import namespace="System.Net" #><#@ import namespace="Microsoft.VisualStudio.TextTemplating" #><#@ import namespace="System.Xml.Linq" #>using System;using System.Configuration;using System.Xml;using System.Xml.Linq;using System.Linq;namespace MyProject.Web { public partial class Configurator { <# var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config"); var results = xDocument.Descendants("appSettings"); const string key = "key"; const string name = "name"; foreach (var xElement in results.Descendants()) {#> public string <#= xElement.Attribute(key).Value#>{get {return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}","\"" , xElement.Attribute(key).Value, "\"")#>];}} <#}#> <# var connectionStrings = xDocument.Descendants("connectionStrings"); foreach(var connString in connectionStrings.Descendants()) {#> public string <#= connString.Attribute(name).Value#>{get {return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}","\"" , connString.Attribute(name).Value, "\"")#>].ConnectionString;}} <#} #> }} The resulting .cs file: using System;using System.Configuration;using System.Xml;using System.Xml.Linq;using System.Linq;namespace MyProject.Web { public partial class Configurator { public string ClientValidationEnabled{get {return ConfigurationManager.AppSettings["ClientValidationEnabled"];}} public string UnobtrusiveJavaScriptEnabled{get {return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"];}} public string ServiceUri{get {return ConfigurationManager.AppSettings["ServiceUri"];}} public string TestConnection{get {return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString;}} public string SecondTestConnection{get {return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString;}} }} Next, I extended the partial class for easy access to the Configuration. However, you could just use the generated class file itself. 
using System;using System.Linq;using System.Xml.Linq;namespace MyProject.Web{ public partial class Configurator { private static readonly Configurator Instance = new Configurator(); public static Configurator For { get { return Instance; } } }} Finally, in my example, I used the Configurator class like so: [TestMethod] public void Test_Web_Config() { var result = Configurator.For.ServiceUri; Assert.AreEqual(result, "http://localhost:30237/Service1/"); }
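    A small refinement worth noting, offered as a hedged sketch rather than something from the article itself: because the template directive already declares hostspecific="true", the hard-coded path to the web.config could be replaced with a host-relative lookup. The assumption below is that the .tt file lives in the same project as Web.config.

        <#
            // Hypothetical alternative to the absolute path used in the template above;
            // Host is only available because the directive sets hostspecific="true".
            var configPath = this.Host.ResolvePath("Web.config");
            var xDocument = XDocument.Load(configPath);
        #>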

    Read the article

  • Confused as to which Prototype helper to use

    - by user284194
    After reading http://api.rubyonrails.org/classes/ActionView/Helpers/PrototypeHelper.html I just can't seem to find what I'm looking for. I have a simplistic model that deletes the oldest message after the list of messages reaches 24, the model is this simple: class Message < ActiveRecord::Base after_create :destroy_old_messages protected def destroy_old_messages messages = Message.all(:order => 'updated_at DESC') messages[24..-1].each {|p| p.destroy } if messages.size >= 24 end end There is a message form below the list of messages which is used to add new messages. I'm using Prototype/RJS to add new messages to the top of the list. create.rjs: page.insert_html :top, :messages, :partial => @message page[@message].visual_effect :grow #page[dom_id(@messages)].replace :partial => @message page[:message_form].reset My index.html.erb is very simple: <div id="messages"> <%= render :partial => @messages %> </div> <%= render :partial => "message_form" %> When new messages are added they appear just fine, but when the 24 message limit has been reached it just keeps adding messages and doesn't remove the old ones. Ideally I'd like them to fade out as the new ones are added, but they can just disappear. The commented line in create.rjs actually works, it removes the expired message but I lose the visual effect when adding a new message. Does anyone have a suggestion on how to accomplish adding and removing messages from this simple list with effects for both? Help would be greatly appreciated. Thanks for reading. P.S.: would periodically_call_remote be helpful in this situation?

    Read the article

  • ASP.NET MVC - PartialView not refreshing

    - by Billy Logan
    Hello Everyone, I have a view that uses a JavaScript callback to reload a partial view. For whatever reason the contents of the partial view do not refresh even though I can step through the entire process and see the page being recalled and populated. Any reason why the page would not display? Code is as follows: <div id="big_image_content"> <% Html.RenderPartial("ZoomImage", Model); %> </div> This link should reload the div above: <a href="javascript:void(0)" onclick="$('#big_image_content').load('/ShopDetai/ZoomImage');" title="<%= shape.Shape %>" alt="<%= shape.Shape %>"> <img src="http://images.rugs-direct.com/<%= shape.Image.ToLower() %>" width="40" alt="<%= shape.Shape %>"> </a> The partial view (ZoomImage.ascx), simplified for now, but still doesn't load: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<RugsDirect.Data.ItemDetailsModel>" %> <%= Model.Category.ToLower() %> And finally the controller side of things: public ActionResult ZoomImage() { try { ItemDetailsModel model = GetMainImageContentModel(); return PartialView("ZoomImage", model); } catch (Exception ex) { //send the error email ExceptionPolicy.HandleException(ex, "Exception Policy"); //redirect to the error page return RedirectToAction("ViewError", "Shop"); } } Again, I can step through this entire process and all seems to be working except for the page not reloading. I can even break on the <%= Model.Category.ToLower() %> of the partial view, but it will not be displayed. Thanks in advance, Billy

    Read the article

  • Passing an instance variable through RJS?

    - by Elliot
    Hey guys, here is my code (roughly): books.html.erb <% @books.each do |book| %> <% @bookid = book.id %> <div id="enter_stuff"> <%= render "input", :bookid => @bookid %> </div> <%end%> _input.html.erb <% @book = Book.find_by_id(@bookid) %> <strong>your book is: <%=h @book.name %></strong> create.rjs page.replace_html :enter_stuff, :partial => 'input', :object => @bookid The problem here is that only create.rjs doesn't seem to work (though, if instead of passing the partial I passed "..." it does work, so I know it's that there are instance variables in the partial that aren't being reset. Any ideas?) So the final question is: how do I pass an instance variable to a partial through the create.rjs file? p.s. I know I will have duplicate div IDs, I'm not worrying about that for now though. Best, Elliot

    Read the article

  • View Models (ViewData), UserControls/Partials and Global variables - best practice?

    - by elado
    Hi I'm trying to figure out a good way to have 'global' members (such as CurrentUser, Theme etc.) in all of my partials as well as in my views. I don't want to have a logic class that can return this data (like BL.CurrentUser) I do think it needs to be a part of the Model in my views So I tried inheriting from BaseViewData with these members. In my controllers, in this way or another (a filter or base method in my BaseController), I create an instance of the inheriting class and pass it as a view data. Everything's perfect till this point, cause then I have my view data available on the main View with the base members. But what about partials? If I have a simple partial that needs to display a blog post then it looks like this: <%@ Control Language="C#" AutoEventWireup="true" Inherits="ViewUserControl<Post>" %> and simple code to render this partial in my view (that its model.Posts is IEnumerable<Post>): <%foreach (Post p in this.Model.Posts) {%> <%Html.RenderPartial("Post",p); %> <%}%> Since the partial's Model isn't BaseViewData, I don't have access to those properties. Hence, I tried to make a class named PostViewData which inherits from BaseViewData, but then my containing views will have a code to actually create the PostViewData in them in order to pass it to the partial: <%Html.RenderPartial("Post",new PostViewData { Post=p,CurrentUser=Model.CurrentUser,... }); %> Or I could use a copy constructor <%Html.RenderPartial("Post",new PostViewData(Model) { Post=p }); %> I just wonder if there's any other way to implement this before I move on. Any suggestions? Thanks!
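    For readers following along, the arrangement described in this question looks roughly like the sketch below; it is only an illustration of the poster's own description, with User and Post standing in for the real domain types.

        // Shared members every view needs.
        public abstract class BaseViewData
        {
            public User CurrentUser { get; set; }
            public string Theme { get; set; }
        }

        // View-specific model for the post partial.
        public class PostViewData : BaseViewData
        {
            public Post Post { get; set; }

            public PostViewData() { }

            // Copy constructor so a partial can inherit the parent view's shared members.
            public PostViewData(BaseViewData parent)
            {
                CurrentUser = parent.CurrentUser;
                Theme = parent.Theme;
            }
        }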

    Read the article

  • Validation L2S question

    - by user158020
    This may be a bit long-winded because I am new to WPF. I have created a partial class for an entity in my L2S model that is primarily used for validation. It implements the OnChanging and OnValidate methods. I am trying to use the MVVM pattern, and in a window/view I have set the DataContext in the XAML: <Window.DataContext> <vm:StartViewModel /> </Window.DataContext> When a user leaves a required field in the view blank, the OnChanging event of the partial class is fired when I close the form, not when I save the data. So, if a user leaves the textbox blank, the old value is retained and the OnChanging method is fired, but I have no idea how to alert the user of the resulting error. Here is my OnChanging code in the partial class: partial void Ondocument_titleChanging(string value) { if (value.Length == 0) throw new Exception("Document title is required."); if (value.Length > 256) throw new Exception("Document title cannot be longer than 256 characters."); } Throwing an exception doesn't notify the user of the error; it just allows the form to close and rejects the changes to the textbox. Hope this makes sense... Edit: this example was taken from Scott Guthrie's article here: http://aspalliance.com/1427_LINQ_to_SQL_Part_5__Binding_UI_using_the_ASPLinqDataSource_Control.5
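    One direction often suggested for a case like this, shown here only as a hedged sketch: expose the same rules through IDataErrorInfo on the partial class and bind with ValidatesOnDataErrors=True, so WPF displays the message instead of relying on the thrown exception. The entity class name Document is an assumption; the property name comes from the snippet above.

        using System.ComponentModel;

        public partial class Document : IDataErrorInfo
        {
            public string Error
            {
                get { return null; }
            }

            // WPF queries this indexer per bound property when the binding
            // sets ValidatesOnDataErrors=True.
            public string this[string columnName]
            {
                get
                {
                    if (columnName == "document_title")
                    {
                        if (string.IsNullOrEmpty(document_title))
                            return "Document title is required.";
                        if (document_title.Length > 256)
                            return "Document title cannot be longer than 256 characters.";
                    }
                    return null;
                }
            }
        }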

    Read the article

  • RegisterClientScriptInclude doesn't work for some reason...

    - by Andrew
    Hey, I've spent at least 2 days trying anything and googling this... but for some reason I can't get RegisterClientScriptInclude to work the way everyone else has it working. First off, I am using .NET 3.5 Ajax, and I am including JavaScript in my partial page refreshes using this code: ScriptManager.RegisterClientScriptBlock(this, typeof(Page), "MyClientCode", script, true); It works perfectly; my JavaScript code contained in the script variable is included every partial refresh. The JavaScript in script is actually quite extensive though, and I would like to store it in a .js file, so logically I make a .js file and try to include it using RegisterClientScriptInclude... however I can't for the life of me get this to work. Here's the exact code: ScriptManager.RegisterClientScriptInclude(this, typeof(Page), "mytestscript", "/js/testscript.js"); The testscript.js file is only included in FULL page refreshes, i.e. when I load the page or do a full postback. I can't get the file to be included in partial refreshes and have no idea why. When viewing the Ajax POST in Firebug I don't see a difference whether I include the file or not. Both of the ScriptManager includes are being run from the exact same place in Page_Load, so they should execute every partial refresh (but only the ScriptBlock does). Anyway, any help or ideas, or further ways I can troubleshoot this problem, would be appreciated. Thanks, Andrew
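    For reference, the two registrations being compared read like the sketch below when placed in Page_Load as described; the inline script body and the ResolveUrl call are illustrative additions, while the key names and the .js path come from the question.

        protected void Page_Load(object sender, EventArgs e)
        {
            // Inline block: the script text itself is registered on every request,
            // including partial postbacks.
            string script = "alert('refreshed');"; // placeholder for the real script body
            ScriptManager.RegisterClientScriptBlock(this, typeof(Page), "MyClientCode", script, true);

            // External include: only a reference to the .js file is registered.
            ScriptManager.RegisterClientScriptInclude(this, typeof(Page), "mytestscript", ResolveUrl("~/js/testscript.js"));
        }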

    Read the article

  • AngularJS: How to make angular load script inside ng-include?

    - by Ranjith R
    Hey, I am building a web page with Angular. The problem is that there are some things already built without Angular and I have to include them as well. The problem is this. I have something like this in my main.html: <ngInclude src="partial.html"> </ngInclude> And my partial.html has something like this: <h2> heading 1 </h2> <script type="text/javascript" src="static/js/partial.js"> </script> And my partial.js has nothing to do with AngularJS. ng-include works and I can see the HTML, but I can not see the JavaScript file being loaded at all. I know how to use Firebug / Chrome dev tools, but I can not even see the network request being made. What am I doing wrong? I know Angular gives some special meaning to the script tag. Can I override it?

    Read the article

  • Rails syntax for comments in templates: is this bug understood?

    - by brahn
    Using Rails 2.3.2 I have a partial _foo.rhtml that begins with a comment as follows: <% # here is a comment %> <li><%= foo %></li> When I render the partial from a view in the traditional way, e.g. <% some_numbers = [1, 2, 3, 4, 5] %> <ul> <%= render :partial => "foo", :collection => some_numbers %> </ul> I found that the <li> and </li> tags are omitted in the output -- i.e. the resulting HTML is <ul> 1 2 3 4 5 </ul> However, I can solve this problem by fixing _foo.rhtml to eliminate the space between the <% and the # so that the partial now reads: <%# here is a comment %> <li><%= foo %></li> My question: what's going on here? E.g., is <% # comment %> simply incorrect syntax for including comments in a template? Or is the problem more subtle? Thanks!

    Read the article

  • Good Replacement for User Control?

    - by David Lively
    I found user controls to be incredibly useful when working with ASP.NET webforms. By encapsulating the code required for displaying a control with the markup, creation of reusable components was very straightforward and very, very useful. While MVC provides convenient separation of concerns, this seems to break encapsulation (ie, you can add a control without adding or using its supporting code, leading to runtime errors). Having to modify a controller every time I add a control to a view seems to me to integrate concerns, not separate them. I'd rather break the purist MVC ideology than give up the benefits of reusable, packaged controls. I need to be able to include components similar to webforms user controls throughout a site, but not for the entire site, and not at a level that belongs in a master page. These components should have their own code not just markup (to interact with the business layer), and it would be great if the page controller didn't need to know about the control. Since MVC user controls don't have codebehind, I can't see a good way to do this. I've searched previous SO questions, and have yet to find a good answer. Options so far In an attempt to avoid turning the comments section into a discussion... RenderAction This allows the view to call another controller, which will be responsible for interacting with the BLL and whatever data is necessary to its corresponding view. The calling view needs to be aware of the sub controller. This seems to provide a nice way to encapsulate partial views and controls, without having to modify the calling controller. RenderPartial The calling controller is still responsible for executing whatever code is associated with the partial view, and making sure that the model passed to the partial view contains the data it expects. Effectively, modifying the partial view potentially means modifying the calling controller. Annoying especially if this is used in multiple places. Portable Areas Place each control in its own project/area?
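    To make the RenderAction option above concrete, a hedged sketch of a self-contained child action follows; the controller, action, view and model names are invented for the example, and [ChildActionOnly] assumes ASP.NET MVC 2 or later.

        using System.Web.Mvc;

        public class NewsletterSignupController : Controller
        {
            // Can only be rendered from a view, never requested directly.
            [ChildActionOnly]
            public ActionResult Box()
            {
                var model = LoadSignupModel(); // this is where the control talks to the business layer
                return PartialView("SignupBox", model);
            }

            private object LoadSignupModel()
            {
                return new { Heading = "Join our newsletter" }; // placeholder model
            }
        }

        // Used from any view, without that page's controller knowing about it:
        // <% Html.RenderAction("Box", "NewsletterSignup"); %>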

    Read the article

  • Django Save Incomplete Progress on Form

    - by jimbob
    I have a django webapp with multiple users logging in and fill in a form. Some users may start filling in a form and lack some required data (e.g., a grant #) needed to validate the form (and before we can start working on it). I want them to be able to fill out the form and have an option to save the partial info (so another day they can log back in and complete it) or submit the full info undergoing validation. Currently I'm using ModelForm for all the forms I use, and the Model has constraints to ensure valid data (e.g., the grant # has to be unique). However, I want them to be able to save this intermediary data without undergoing any validation. The solution I've thought of seems rather inelegant and un-django-ey: create a "Save Partial Form" button that saves the POST dictionary converts it to a shelf file and create a "SavedPartialForm" model connecting the user to partial forms saved in the shelf. Does this seem sensible? Is there a better way to save the POST dict directly into the db? Or is an add-on module that does this partial-save of a form (which seems to be a fairly common activity with webforms)? My biggest concern with my method is I want to eventually be able to do this form-autosave automatically (say every 10 minutes) in some ajax/jquery method without actually pressing a button and sending the POST request (e.g., so the user isn't redirected off the page when autosave is triggered). I'm not that familiar with jquery and am wondering if it would be possible to do this.

    Read the article

  • ASP.net MVC Linq-To-SQL Extended Class Field Binding

    - by user336858
    Hi there, The short version of this question is "Is there a way to get automatic View Object binding for fields defined in a partial class for a Linq-To-SQL generated class?" Apologies if it's been asked before. Example Suppose I have a typical MVC setup with the tables: Posts {PostID, ...} Categories {CategoryID, ...} A post can have more than one category, and a category can identify more than one post. Thus suppose further that I need an extra table: PostCategories {PostID, CategoryID, ...} This handles the many-to-many relationship between posts and categories. As far as I know, there's no way to do this in Linq-to-SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality. Something like: public partial class Post { public IEnumerable<Category> Categories { get { ... } set { ... } } } So here's my question: If a user is accessing my MVC application front-end and begins editing a Post object, they might enter an invalid category. When the server recognizes the invalid input, the usual practice is to return the faulty object to the original view for re-editing along with some error messages. The fields in the edit page are re-populated with the provided values. However I don't know how to get this mechanism to work with the properties I created with the partial class as shown above. Any terminology, links, or tips you can provide would be tremendously helpful!
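    One pattern that often comes up for this scenario, given here only as a hedged sketch with invented names: bind the edit form to a small edit model so the category selection round-trips like any other field, then map it back onto the Post and its PostCategories rows on the server.

        using System.Web.Mvc;

        public class PostEditModel
        {
            public int PostID { get; set; }
            public string Title { get; set; }
            public int[] SelectedCategoryIds { get; set; } // posted back from checkboxes or a multi-select
        }

        public class PostsController : Controller
        {
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Edit(PostEditModel form)
            {
                if (!ModelState.IsValid)
                {
                    // Redisplay the submitted values, including the category picks.
                    return View(form);
                }

                // Map form.SelectedCategoryIds back to PostCategories rows here.
                return RedirectToAction("Details", new { id = form.PostID });
            }
        }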

    Read the article

  • In Rails 3, how does one render HTML within a JSON response?

    - by ylg
    I'm porting an application from Merb 1.1 / 1.8.7 to Rails 3 (beta) / 1.9.1 that uses JSON responses containing HTML fragments, e.g., a JSON container specifying an update on a user record, together with the rendered HTML for the updated user row. In Merb, since whatever a controller method returns is given to the client, one can put together a Hash, assign a rendered partial to one of the keys and return hash.to_json (though that certainly may not be the best way.) In Rails, it seems that to get data back to the client one must use render, and render can only be called once, so rendering the hash to JSON won't work because of the partial render. From reading around, it seems one could put that data into a JSON .erb view file, with <%= render :partial => ... %> in it, and render that. Is there a Rails way of solving this problem (returning JSON containing one or more HTML fragments) other than that? In Merb: only_provides :json ... self.status = 204 # or appropriate if not async return { 'action' => 'update', 'type' => 'user', 'id' => @user.id, 'html' => partial('user_row', format: :html, user: @user) }.to_json In Rails?

    Read the article

  • Using Telerik's new LINQ implementation to create OData feeds

    This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, you can connect to any database that OpenAccess can connect to such as: SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess; however, you can always hook into the advanced ORM features like caching, fetch plan optimization, etc., if needed. Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using Data Services Update for .NET Framework 3.5 SP1. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise in about 2-3 weeks that you can do this in less than 30 seconds. Stay tuned for that.) Step 1 (15-20 seconds): Building your Domain Model In your web project in Visual Studio, right click on the project and select Add|New Item and select Telerik OpenAccess Domain Model as your item template. Give the file a meaningful name as well. Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc.) and build the connection string. If you already have a Visual Studio connection string saved, this step is trivial. Then select your tables, enter a name for your model and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details. I named my model NorthwindEntities and will use that in my DataService. Step 2 (20-25 seconds): Adding and Configuring your Data Service In your web project in Visual Studio, right click on the project and select Add|New Item and select ADO.NET Data Service as your item template and name your service. In the code behind for your Data Service you have to make three small changes. Add the name of your Telerik Domain Model (entered in Step 1) as the DataService name (shown on line 6 below as NorthwindEntities) and uncomment line 11 and add a * to show all entities. Optionally, if you want to take advantage of the DataService 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration in line 9.)
1: using System.Data.Services; 2: using System.Data.Services.Common; 3:   4: namespace Telerik.RLINQ.Astoria.Web 5: { 6: public class NorthwindService : DataService<NorthwindEntities> 7: { 8: // change the IDataServiceConfiguration to DataServiceConfiguration 9: public static void InitializeService(DataServiceConfiguration config) 10: { 11: config.SetEntitySetAccessRule("*", EntitySetRights.All); 12: // take advantage of the "Astoria 3.5 Update" features 13: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 14: } 15: } 16: } Step 3 (~30 seconds): Adding the DataServiceKeys You now have to tell your data service what the primary keys of each entity are. To do this you have to create a new code file and create a few partial classes. If you type fast, use copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code or so in about 30 seconds. This is the most tedious step, but don't worry, I've bribed some of the developers and our next update will eliminate this step completely. Just create a partial class for each entity you have mapped and add the attribute [DataServiceKey] on top of it along with the key's field name. If you have any complex properties, you will need to make them a primitive type, as I do in line 15. Create this as a separate file, don't manipulate the generated data access classes in case you want to regenerate them again later (even though that would be much faster.) 1: using System.Data.Services.Common; 2:   3: namespace Telerik.RLINQ.Astoria.Web 4: { 5: [DataServiceKey("CustomerID")] 6: public partial class Customer 7: { 8: } 9:   10: [DataServiceKey("OrderID")] 11: public partial class Order 12: { 13: } 14:   15: [DataServiceKey(new string[] { "OrderID", "ProductID" })] 16: public partial class OrderDetail 17: { 18: } 19:   20: } Done! Time to run the service. Now, let's run the service! Select the svc file and right click and say View in Browser. You will see your OData service and can interact with it in the browser. Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, or PHP, etc. Happy Data Servicing!
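As a quick smoke test of the finished feed, here is a hedged sketch of the "consume it using LINQ" option mentioned above, using the ADO.NET Data Services client (DataServiceContext). The service URL is a placeholder, and the minimal Customer class stands in for the typed client code that "Add Service Reference" would normally generate.

    using System;
    using System.Data.Services.Client;
    using System.Data.Services.Common;
    using System.Linq;

    // Minimal client-side type; a service reference would normally generate this.
    [DataServiceKey("CustomerID")]
    public class Customer
    {
        public string CustomerID { get; set; }
        public string CompanyName { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // Placeholder URL - point this at the NorthwindService.svc created above.
            var ctx = new DataServiceContext(new Uri("http://localhost:1234/NorthwindService.svc"));

            // LINQ over the Customers entity set; Take(5) is sent to the service as $top=5.
            var customers = ctx.CreateQuery<Customer>("Customers").Take(5).ToList();

            foreach (var c in customers)
                Console.WriteLine("{0} - {1}", c.CustomerID, c.CompanyName);
        }
    }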
Technorati Tags: Telerik, Astoria, Data Services

    Read the article

  • Issue 15: Introducing David Callaghan

    - by rituchhibber
        DAVID'S VIEW INTRODUCING DAVID CALLAGHAN David Callaghan Senior Vice President, Oracle EMEA Alliances and Channels David Callaghan is the Senior Vice President, Alliances & Channels, for Oracle EMEA. He is responsible for all elements of the Oracle Partner Network across the region and leads Oracle as it continues to deliver customer success through the alignment of Oracle's applications and hardware engineered to work together. As I reflect on our last quarter, I thank all our partners for your continued commitment and expertise in embracing the unique opportunity we have before us. The ability to engage with hardware, applications and technology is a real differentiator. We have been able to engage with deep specialization in individual products for some time, which has brought tremendous benefits. But now we can strengthen this further with the broad stack specialization that Oracle on Oracle brings. Now is the time to make that count. While customers are finishing spending this year's budget and planning their spend for the next calendar year, it is now that we need to build the quality opportunities and pipeline for the rest of the year. We have OpenWorld just around the corner with its compelling new product announcements and environment to engage customers at all levels. Make sure you use this event, and every opportunity it brings. In the next quarter you can expect to see targeted 'value creation' campaigns driven by Oracle, and I encourage you to exploit these where they will have greatest impact. My team will be engaging closely with their Oracle sales colleagues to help them leverage the tremendous value you bring, and to develop their ability to work effectively and independently with you, our partners. My team and I are all relentlessly committed to achieving partner, and customer, satisfaction to demonstrate the value of the Passion for Partnering that we all share. With best regards David Back to the welcome page

    Read the article

  • Rock Stars and now OPN All-Stars? Bring it.

    - by sandra.haan
    We are talking everything OPN All-Star - from home-court advantage to taking too many shots across a wide variety of industries, skill sets, focus areas, broad solution sets, applications and technologies. As a Platinum Partner, Intelenex levels of quality specialization range from ERP/EBS, CRM, AIA to Hyperion. Slam dunk! This is what gives Intelenex a well deserved star studded "baller" celebrity status like the LA Lakers very own Kobe Bryant. While Intelenex has been busy multi-specializing and taking names, Tyler Prince, group vp, North America Sales tells us a little bit about the value OPN's overall strategy brings to the table. This exclusive partnership allows OPN Specialized partners to provide customers with a solution that helps them adapt swiftly to new expansion conditions and changes. Namely, partners can pick an area to focus and can leverage that focus and competency to differentiate from the competition. You will be so HOT on the OPN court the Miami Heat will have nothing on you. Watch out, Lebron. Additionally, this specialization in products or set of products is recognized by the entire Oracle sales force, which is vital to all partners, but most importantly your end-customers. You will be so stylishly famous your cheerleader squad will not be able to steal the spotlight from you. Are you really All-Star worthy this season? Jump in and join Tyler's halftime report on OPN's All-Star program in this VAR Guy FastChat video to find out: Now that's what we call some March Madness - Good selling, The OPN Communications Team

    Read the article

  • Specialized & Recognized by Oracle: Award season - make your submission for the OPN Specialization Awards

    - by Jürgen Kress
  OPN Specialization Award: Submit your nomination 2010 As an Oracle Partner in the process to become SOA & Application Grid Specialized and working on SOA and Application Grid opportunities, please make sure that you submit your OPN Specialization Award nomination. Prizes include free Oracle OpenWorld tickets, marketing budgets for joint campaigns and a joint press release. "These awards will recognize the high level of innovation, excellence and commitment our partners bring to the table when they become Specialized with Oracle. We’re looking for partners with a proven track record in delivering winning, proven solutions that solve customers' most critical business challenges. Our Award winners will be partners that have demonstrated tangible success, growth in their Oracle business and outstanding Oracle solutions." Stein Surlien, SVP Oracle Alliances and Channels EMEA Nominations are open to partners based in EMEA from 1st March to 2nd July 2010. Be recognized! Submit your nominations today     Oracle Fusion Middleware Innovation Awards 2010 As an Oracle Customer and Partner, make sure that you submit your Oracle Fusion Middleware Innovation Awards nomination. Does your company use Oracle Fusion Middleware innovatively? Nominate your organization today for a chance to be recognized for your cutting-edge solution using any of the following Oracle Fusion Middleware products: Oracle Application Grid products Oracle SOA Suite Data Integration & Availability Oracle Identity Management Suite Oracle Fusion Middleware with Oracle Applications Enterprise 2.0 Prizes include: FREE pass to Oracle OpenWorld 2010 in San Francisco for select winners in each category. Honored by Oracle executives at awards ceremony held during Oracle OpenWorld 2010 in San Francisco. Oracle Middleware Innovation Award Winner Plaque 1-3 meetings with Oracle Executives during Oracle OpenWorld 2010 Feature article placement in Oracle Magazine and placement in Oracle Press Release Customer snapshot and video testimonial opportunity, to be hosted on oracle.com Podcast interview opportunity with Senior Oracle Executive Submit your nomination to [email protected] on or before August 6th 2010 to win Oracle Fusion Middleware Innovation Awards 2010.

    Read the article

  • FY11 plans – how can you increase your SOA business?

    - by Jürgen Kress
    Thanks for a fantastic FY10 - it was great to work with all of you! Yes, with the economic crisis the fiscal year was hard. SOA and Oracle Fusion Middleware address these challenges and can help companies save costs by integrating their systems and automating and changing their processes. More when we publish our fiscal year results. What is on the agenda for FY11? Specialization: It is key that you become SOA & Application Grid Specialized. We will focus our activities and budgets on partners with Specialization! Sales campaigns: To support you in our joint business we will continue to run joint sales campaigns. With OFM 11g there is a great opportunity to generate service revenue by migrating and consolidating on the platform. It is key that you register your opportunities within the Open Market Model (OMM) to ensure sales alignment. Enablement: With the release of many new products and versions, training is key. We will continue to offer training dedicated to your role: sales, pre-sales and implementation. Make sure that you check local partner training calendars and sign up for the next bootcamps. Thanks for your support! Jürgen Kress

    Read the article

  • BPI On Demand achieves both Oracle Fusion CRM Cloud Service 2013 Specialisation and Reseller status!

    - by Richard Lefebvre
    Oracle is delighted to share with you that BPI OnDemand has achieved the Oracle Fusion CRM Cloud Service 2013 Specialization and is the first ever Oracle Sales Cloud reseller in EMEA! One of Oracle's most active CRM SaaS partners across EMEA, BPI OnDemand operates out of the UK with subsidiaries in Spain and South Africa that will also benefit locally from the specialization and reseller status. BPI OnDemand distinguishes itself from other Oracle Sales Cloud integrators with two unique implementation options: 1) Rapid Advantage Fixed Scope for as low as £20,000, or 2) their famous zero-upfront-cost Fully Managed Cloud CRM Service, which has no equivalent in Europe. BPI OnDemand already has two live Oracle Sales Cloud customers and is engaging in many other opportunities, including large corporate accounts. Meet BPI OnDemand here or on LinkedIn or on Twitter.

    Read the article

  • Issue with Multiple ModalPopups, ValidationSummary and UpdatePanels

    - by Aaron Hoffman
    I am having an issue when a page contains multiple ModalPopups each containing a ValidationSummary Control. Here is the functionality I need: A user clicks a button and a Modal Popup appears with dynamic content based on the button that was clicked. (This functionality is working. Buttons are wrapped in UpdatePanels and the partial page postback calls .Show() on the ModalPopup) "Save" button in ModalPopup causes client side validation, then causes a full page postback so the entire ModalPopup disappears. (ModalPopup could disappear another way - the ModalPopup just needs to disappear after a successful save operation) If errors occur in the codebehind during Save operation, messages are added to the ValidationSummary (contained within the ModalPopup) and the ModalPopup is displayed again. When the ValidationSummary's are added to the PopupPanel's, the ModalPopups no longer display correctly after a full page postback caused by the "Save" button within the second PopupPanel. (the first panel continues to function correctly) Both PopupPanels are displayed, and neither is "Popped-Up", they are displayed in-line. Any ideas on how to solve this? Image of Error State (after "PostBack Popup2" button has been clicked) ASPX markup <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <%--********************************************************************* Popup1 *********************************************************************--%> <asp:UpdatePanel ID="Popup1ShowButtonUpdatePanel" runat="server"> <ContentTemplate> <%--This button will cause a partial page postback and pass a parameter to the Popup1ModalPopup in code behind and call its .Show() method to make it visible--%> <asp:Button ID="Popup1ShowButton" runat="server" Text="Show Popup1" OnClick="Popup1ShowButton_Click" CommandArgument="1" /> </ContentTemplate> </asp:UpdatePanel> <%--Hidden Control is used as ModalPopup's TargetControlID .Usually this is the ID of control that causes the Popup, but we want to control the modal popup from code behind --%> <asp:HiddenField ID="Popup1ModalPopupTargetControl" runat="server" /> <ajax:ModalPopupExtender ID="Popup1ModalPopup" runat="server" TargetControlID="Popup1ModalPopupTargetControl" PopupControlID="Popup1PopupControl" CancelControlID="Popup1CancelButton"> </ajax:ModalPopupExtender> <asp:Panel ID="Popup1PopupControl" runat="server" CssClass="ModalPopup" Style="width: 600px; background-color: #FFFFFF; border: solid 1px #000000;"> <%--This button causes validation and a full-page post back. Full page postback will causes the ModalPopup to be Hid. If there are errors in code behind, the code behind will add a message to the ValidationSummary, and make the ModalPopup visible again--%> <asp:Button ID="Popup1PostBackButton" runat="server" Text="PostBack Popup1" OnClick="Popup1PostBackButton_Click" />&nbsp; <asp:Button ID="Popup1CancelButton" runat="server" Text="Cancel Popup1" /> <asp:UpdatePanel ID="Popup1UpdatePanel" runat="server"> <ContentTemplate> <%--*************ISSUE HERE*************** The two ValidationSummary's are causing an issue. When the second ModalPopup's PostBack button is clicked, Both ModalPopup's become visible, but neither are "Popped-Up". 
If ValidationSummary's are removed, both ModalPopups Function Correctly--%> <asp:ValidationSummary ID="Popup1ValidationSummary" runat="server" /> <%--Will display dynamically passed paramter during partial page post-back--%> Popup1 Parameter:<asp:Literal ID="Popup1Parameter" runat="server"></asp:Literal><br /> </ContentTemplate> </asp:UpdatePanel> &nbsp;<br /> &nbsp;<br /> &nbsp;<br /> </asp:Panel> &nbsp;<br /> &nbsp;<br /> &nbsp;<br /> <%--********************************************************************* Popup2 *********************************************************************--%> <asp:UpdatePanel ID="Popup2ShowButtonUpdatePanel" runat="server"> <ContentTemplate> <%--This button will cause a partial page postback and pass a parameter to the Popup2ModalPopup in code behind and call its .Show() method to make it visible--%> <asp:Button ID="Popup2ShowButton" runat="server" Text="Show Popup2" OnClick="Popup2ShowButton_Click" CommandArgument="2" /> </ContentTemplate> </asp:UpdatePanel> <%--Hidden Control is used as ModalPopup's TargetControlID .Usually this is the ID of control that causes the Popup, but we want to control the modal popup from code behind --%> <asp:HiddenField ID="Popup2ModalPopupTargetControl" runat="server" /> <ajax:ModalPopupExtender ID="Popup2ModalPopup" runat="server" TargetControlID="Popup2ModalPopupTargetControl" PopupControlID="Popup2PopupControl" CancelControlID="Popup2CancelButton"> </ajax:ModalPopupExtender> <asp:Panel ID="Popup2PopupControl" runat="server" CssClass="ModalPopup" Style="width: 600px; background-color: #FFFFFF; border: solid 1px #000000;"> <%--This button causes validation and a full-page post back. Full page postback will causes the ModalPopup to be Hid. If there are errors in code behind, the code behind will add a message to the ValidationSummary, and make the ModalPopup visible again--%> <asp:Button ID="Popup2PostBackButton" runat="server" Text="PostBack Popup2" OnClick="Popup2PostBackButton_Click" />&nbsp; <asp:Button ID="Popup2CancelButton" runat="server" Text="Cancel Popup2" /> <asp:UpdatePanel ID="Popup2UpdatePanel" runat="server"> <ContentTemplate> <%--*************ISSUE HERE*************** The two ValidationSummary's are causing an issue. When the second ModalPopup's PostBack button is clicked, Both ModalPopup's become visible, but neither are "Popped-Up". 
If ValidationSummary's are removed, both ModalPopups Function Correctly--%> <asp:ValidationSummary ID="Popup2ValidationSummary" runat="server" /> <%--Will display dynamically passed paramter during partial page post-back--%> Popup2 Parameter:<asp:Literal ID="Popup2Parameter" runat="server"></asp:Literal><br /> </ContentTemplate> </asp:UpdatePanel> &nbsp;<br /> &nbsp;<br /> &nbsp;<br /> </asp:Panel> Code Behind protected void Popup1ShowButton_Click(object sender, EventArgs e) { Button btn = sender as Button; //Dynamically pass parameter to ModalPopup during partial page postback Popup1Parameter.Text = btn.CommandArgument; Popup1ModalPopup.Show(); } protected void Popup1PostBackButton_Click(object sender, EventArgs e) { //if there is an error, add a message to the validation summary and //show the ModalPopup again //TODO: add message to validation summary //show ModalPopup after page refresh (request/response) Popup1ModalPopup.Show(); } protected void Popup2ShowButton_Click(object sender, EventArgs e) { Button btn = sender as Button; //Dynamically pass parameter to ModalPopup during partial page postback Popup2Parameter.Text = btn.CommandArgument; Popup2ModalPopup.Show(); } protected void Popup2PostBackButton_Click(object sender, EventArgs e) { //***********After This is when the issue appears********************** //if there is an error, add a message to the validation summary and //show the ModalPopup again //TODO: add message to validation summary //show ModalPopup after page refresh (request/response) Popup2ModalPopup.Show(); }

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part I

    - by mbridge
    First, you can download the source code from http://efmvc.codeplex.com. The following frameworks will be used for this step-by-step tutorial.

    Define Domain Model
    Let's create the domain model for our simple web application. We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. In this post we will be focusing on CRUD operations for the entity Category and will work on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time.

    Category Class

    public class Category
    {
        public int CategoryId { get; set; }
        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }
        public string Description { get; set; }
        public virtual ICollection<Expense> Expenses { get; set; }
    }

    Expense Class

    public class Expense
    {
        public int ExpenseId { get; set; }
        public string Transaction { get; set; }
        public DateTime Date { get; set; }
        public double Amount { get; set; }
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }

    The above entities are very simple POCO (Plain Old CLR Object) classes, and the entity Category is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to an API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently enabled with a separate API that runs on top of Entity Framework 4; EF Code First had reached CTP 5 when I wrote this article.

    Creating Context Class for Entity Framework
    We have created our domain model, so let's create a class for working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add the reference to EF Code First.

    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }
    }

    The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find any connection string matching that convention, it will automatically create the database in the local SQL Express instance by default, and the name of the database will be the same as that of the DbContext class. You can also define the name of the database in the constructor of the DbContext class. Unlike NHibernate, we don't have to use any XML-based mapping files or a Fluent interface for mapping between our model and database.
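    As a quick illustration of the conventions just described, here is a minimal usage sketch (not part of the article's solution): constructing MyFinanceContext and saving an entity is enough for Code First to create the MyFinance database on first use. The console host, the sample data and the MyFinance.Domain namespace import are illustrative assumptions.

    using System;
    using System.Linq;
    using MyFinance.Domain; // assumed namespace for MyFinanceContext and Category

    class Program
    {
        static void Main()
        {
            using (var db = new MyFinanceContext())
            {
                // The Categories table is generated from the Category POCO by convention;
                // the MyFinance database itself is created on first use if it does not exist.
                db.Categories.Add(new Category { Name = "Travel", Description = "Trips and mileage" });
                db.SaveChanges();

                Console.WriteLine("Categories stored: {0}", db.Categories.Count());
            }
        }
    }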
    The model classes in Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for the primary key is 'Id' or '<class name>Id'. If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and the validation rules are automatically enforced when a model object is updated or saved.

    Generic Repository for EF Code First
    We have created the model classes and the DbContext class. Now we have to create a generic repository pattern for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository. Let's create a generic repository for working with the DbContext and DbSet generics.

    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Delete(T entity);
        T GetById(long Id);
        IEnumerable<T> All();
    }

    RepositoryBase - Generic Repository class

    public abstract class RepositoryBase<T> where T : class
    {
        // The context is obtained lazily from the injected IDatabaseFactory
        // and the DbSet for T is resolved from it.
        private MyFinanceContext database;
        private readonly IDbSet<T> dbset;

        protected RepositoryBase(IDatabaseFactory databaseFactory)
        {
            DatabaseFactory = databaseFactory;
            dbset = Database.Set<T>();
        }

        protected IDatabaseFactory DatabaseFactory { get; private set; }

        protected MyFinanceContext Database
        {
            get { return database ?? (database = DatabaseFactory.Get()); }
        }

        public virtual void Add(T entity)
        {
            dbset.Add(entity);
        }

        public virtual void Delete(T entity)
        {
            dbset.Remove(entity);
        }

        public virtual T GetById(long id)
        {
            return dbset.Find(id);
        }

        public virtual IEnumerable<T> All()
        {
            return dbset.ToList();
        }
    }

    DatabaseFactory class

    public class DatabaseFactory : Disposable, IDatabaseFactory
    {
        private MyFinanceContext database;

        public MyFinanceContext Get()
        {
            return database ?? (database = new MyFinanceContext());
        }

        protected override void DisposeCore()
        {
            if (database != null)
                database.Dispose();
        }
    }

    Unit of Work
    If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling the Unit of Work.

    public interface IUnitOfWork
    {
        void Commit();
    }

    UnitOfWork class

    public class UnitOfWork : IUnitOfWork
    {
        private readonly IDatabaseFactory databaseFactory;
        private MyFinanceContext dataContext;

        public UnitOfWork(IDatabaseFactory databaseFactory)
        {
            this.databaseFactory = databaseFactory;
        }

        protected MyFinanceContext DataContext
        {
            get { return dataContext ?? (dataContext = databaseFactory.Get()); }
        }

        public void Commit()
        {
            DataContext.Commit();
        }
    }

    The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.

    Repository class for Category
    In this post we will be focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic repository RepositoryBase<T> (a short note on the context's Commit method follows first).
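    One gap worth noting before the Category repository: UnitOfWork.Commit calls DataContext.Commit(), but the MyFinanceContext listed earlier does not show such a member. Going by the sentence above (Commit ultimately executes SaveChanges), a minimal sketch of the context with that method could look like the following; the OnModelCreating override only illustrates the fluent-API refinement mentioned at the start of this excerpt. The fluent mapping is an illustrative assumption rather than code from the article, and the EF 4.1-style DbModelBuilder type name is used (the CTP 5 builds named this type differently).

    using System.Data.Entity;

    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }

        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }

        // Referenced by UnitOfWork.Commit(); per the text it simply delegates to SaveChanges().
        public virtual void Commit()
        {
            SaveChanges();
        }

        // Illustrative fluent-API refinement (assumption): the same rule could instead be
        // expressed with data annotations, as the article does on the Category class.
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Category>()
                        .Property(c => c.Name)
                        .HasMaxLength(25)
                        .IsRequired();
        }
    }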
    public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository
    {
        public CategoryRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }
    }

    public interface ICategoryRepository : IRepository<Category>
    {
    }

    If we need additional methods beyond the generic repository for Category, we can define them in the CategoryRepository.

    Dependency Injection using Unity 2.0
    If you are new to Inversion of Control/Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store resolved instances in the current HttpContext.

    public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
    {
        public override object GetValue()
        {
            return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
        }
        public override void RemoveValue()
        {
            HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
        }
        public override void SetValue(object newValue)
        {
            HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
        }
        public void Dispose()
        {
            RemoveValue();
        }
    }

    Let's create a controller factory for Unity in the ASP.NET MVC 3 application. The listing as excerpted here begins mid-method; a fuller sketch of the factory appears below, after the Unity configuration.

                404, String.Format(
                    "The controller for path '{0}' could not be found " +
                    "or it does not implement IController.",
                    reqContext.HttpContext.Request.Path));

        if (!typeof(IController).IsAssignableFrom(controllerType))
            throw new ArgumentException(
                    string.Format(
                        "Type requested is not a controller: {0}",
                        controllerType.Name),
                    "controllerType");
        try
        {
            controller = container.Resolve(controllerType) as IController;
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException(String.Format(
                                    "Error resolving controller {0}",
                                    controllerType.Name), ex);
        }
        return controller;
        }
    }

    Configure contract and concrete types in Unity
    Let's configure our contract and concrete types in Unity for resolving our dependencies.

    private void ConfigureUnity()
    {
        //Create UnityContainer
        IUnityContainer container = new UnityContainer()
            .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())
            .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())
            .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());

        //Set container for Controller Factory
        ControllerBuilder.Current.SetControllerFactory(
            new UnityControllerFactory(container));
    }

    In the ConfigureUnity method above, we are registering our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let's call the ConfigureUnity method in Global.asax.cs to set the controller factory for Unity and configure the types with Unity.

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
        ConfigureUnity();
    }

    Developing the web application using ASP.NET MVC 3
    We have created the domain model for our web application and have also created the repositories and configured the dependencies with the Unity container.
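    Before moving on to the controllers, here is the fuller sketch of the controller factory promised above, since the excerpt opens in the middle of the 404 HttpException. The assumption that the factory derives from DefaultControllerFactory and receives the IUnityContainer through its constructor is mine; it matches the new UnityControllerFactory(container) call in ConfigureUnity, but the article's actual class may differ.

    using System;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;
    using Microsoft.Practices.Unity;

    public class UnityControllerFactory : DefaultControllerFactory
    {
        private readonly IUnityContainer container;

        public UnityControllerFactory(IUnityContainer container)
        {
            this.container = container;
        }

        protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
        {
            if (controllerType == null)
                throw new HttpException(
                    404,
                    String.Format(
                        "The controller for path '{0}' could not be found " +
                        "or it does not implement IController.",
                        reqContext.HttpContext.Request.Path));

            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new ArgumentException(
                    String.Format("Type requested is not a controller: {0}", controllerType.Name),
                    "controllerType");

            try
            {
                // Resolve the controller (and its constructor dependencies) from Unity.
                return (IController)container.Resolve(controllerType);
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(
                    String.Format("Error resolving controller {0}", controllerType.Name), ex);
            }
        }
    }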
Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create controller class for Category Category Controller public class CategoryController : Controller {     private readonly ICategoryRepository categoryRepository;     private readonly IUnitOfWork unitOfWork;           public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)     {         this.categoryRepository = categoryRepository;         this.unitOfWork = unitOfWork;     }       public ActionResult Index()     {         var categories = categoryRepository.All();         return View(categories);     }     [HttpGet]     public ActionResult Edit(int id)     {         var category = categoryRepository.GetById(id);         return View(category);     }       [HttpPost]     public ActionResult Edit(int id, FormCollection collection)     {         var category = categoryRepository.GetById(id);         if (TryUpdateModel(category))         {             unitOfWork.Commit();             return RedirectToAction("Index");         }         else return View(category);                 }       [HttpGet]     public ActionResult Create()     {         var category = new Category();         return View(category);     }           [HttpPost]     public ActionResult Create(Category category)     {         if (!ModelState.IsValid)         {             return View("Create", category);         }                     categoryRepository.Add(category);         unitOfWork.Commit();         return RedirectToAction("Index");     }       [HttpPost]     public ActionResult Delete(int  id)     {         var category = categoryRepository.GetById(id);         categoryRepository.Delete(category);         unitOfWork.Commit();         var categories = categoryRepository.All();         return PartialView("CategoryList", categories);       }        } Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let’s create a partial view CategoryList.cshtml for listing category information and providing link for Edit and Delete operations. CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category>      <table>         <tr>         <th>Actions</th>         <th>Name</th>          <th>Description</th>         </tr>     @foreach (var item in Model) {             <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.CategoryId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })                           </td>             <td>                 @item.Name             </td>             <td>                 @item.Description             </td>         </tr>         }       </table>     <p>         @Html.ActionLink("Create New", "Create")     </p> The delete link is providing Ajax functionality using the Ajax.ActionLink. This will call an Ajax request for Delete action method in the CategoryCotroller class. In the Delete action method, it will return Partial View CategoryList after deleting the record. We are using CategoryList view for the Ajax functionality and also for Index view using for displaying list of category information. 
    Let's create the Index view using the partial view CategoryList.

    Index.cshtml

    @model IEnumerable<MyFinance.Domain.Category>
    @{
        ViewBag.Title = "Index";
    }

    <h2>Category List</h2>

    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>

    <div id="divCategoryList">
        @Html.Partial("CategoryList", Model)
    </div>

    We can call partial views using the Html.Partial helper method. Now we are going to create view pages for the insert and update functionality for the Category. Both view pages share a common user interface for entering the category information, so I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it on view pages using @Html.EditorFor(model => model). So let's create a template with the name Category.

    Category.cshtml

    @model MyFinance.Domain.Category
    <div class="editor-label">
        @Html.LabelFor(model => model.Name)
    </div>
    <div class="editor-field">
        @Html.EditorFor(model => model.Name)
        @Html.ValidationMessageFor(model => model.Name)
    </div>
    <div class="editor-label">
        @Html.LabelFor(model => model.Description)
    </div>
    <div class="editor-field">
        @Html.EditorFor(model => model.Description)
        @Html.ValidationMessageFor(model => model.Description)
    </div>

    Let's create the view page for inserting Category information.

    @model MyFinance.Domain.Category

    @{
        ViewBag.Title = "Save";
    }

    <h2>Create</h2>

    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

    @using (Html.BeginForm()) {
        @Html.ValidationSummary(true)
        <fieldset>
            <legend>Category</legend>
            @Html.EditorFor(model => model)
            <p>
                <input type="submit" value="Create" />
            </p>
        </fieldset>
    }

    <div>
        @Html.ActionLink("Back to List", "Index")
    </div>

    ViewStart file
    In Razor views, we can add a file named _viewstart.cshtml in the Views directory, and it will be shared among all the views within the Views directory. The code below in _viewstart.cshtml sets the Layout page for every view in the Views folder.

    @{
        Layout = "~/Views/Shared/_Layout.cshtml";
    }

    Tomorrow, we will continue with the second part of this article. :)

    Read the article

  • Page replace with RJS

    - by Jiang
    Hi all, I'm trying to implement a vote feature in one of my Rails projects. I use the following code (in vote.rjs) to replace the page with a partial template (_vote.rhtml), but when I click, the vote count is not updated immediately; I have to refresh the page to see the change.

    vote.rjs

    page.replace("votes#{@foundphoto.id}", :partial => "vote", :locals => {:voteable => @foundphoto})

    The partial template is as follows:

    _vote.rhtml

    <%= link_to_remote "+(#{voteable.votes_for})", :update => "vote", :url => { :action => "vote", :id => voteable.id, :vote => "for" } %> /
    <%= link_to_remote "-(#{voteable.votes_against})", :update => "vote", :url => { :action => "vote", :id => voteable.id, :vote => "against" } %>

    Any ideas? Thanks.

    Read the article

  • textbox not getting refreshed

    - by oo
    I am doing an Ajax call and I refresh a partial view. Inside the partial view I have this: <%= Html.TextBox("instance.Id", Model.Id) %> When I put a breakpoint here, Model.Id has a number in it, but after the Ajax refresh is done the textbox just shows up with a 0. When I do a full browser refresh, the correct number shows up in the textbox. When I use Firebug to look at the data in my callback I see this: <input id="instance_Id" name="instance.Id" type="text" value="0" /> Everything else in the partial view refreshes fine. Any ideas on what could be going wrong here?

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >