Search Results

Search found 1369 results on 55 pages for 'where clause'.


  • C#/.NET Little Wonders: Use Cast() and OfType() to Change Sequence Type

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    We’ve seen how the Select() extension method lets you project a sequence from one type to a new type, which is handy for getting just parts of items, or building new items. But what happens when the items in the sequence are already the type you want, but the sequence itself is typed to an interface or super-type instead of the sub-type you need? For example, you may have a sequence of Rectangle stored in an IEnumerable<Shape> and want to consider it an IEnumerable<Rectangle> sequence instead. Today we’ll look at two handy extension methods, Cast<TResult>() and OfType<TResult>(), which help you with this task.

    Cast<TResult>() – Attempt to cast all items to type TResult

    So, the first thing we can do would be to attempt to create a sequence of TResult from every item in the source sequence. Typically we’d do this if we had an IEnumerable<T> where we knew that every item was actually a TResult, where TResult inherits/implements T. For example, assume the typical Shape example classes:

        // abstract base class
        public abstract class Shape { }

        // a basic rectangle
        public class Rectangle : Shape
        {
            public int Width { get; set; }
            public int Height { get; set; }
        }

    And let’s assume we have a sequence of Shape where every Shape is a Rectangle…

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            // ...
        };

    To get the sequence of Shape as a sequence of Rectangle, of course, we could use a Select() clause, such as:

        // select each Shape, cast it to Rectangle
        var rectangles = shapes
            .Select(s => (Rectangle)s)
            .ToList();

    But that’s a bit verbose, and fortunately there is already a facility built in and ready to use in the form of the Cast<TResult>() extension method:

        // cast each item to Rectangle and store in a List<Rectangle>
        var rectangles = shapes
            .Cast<Rectangle>()
            .ToList();

    However, we should note that if anything in the list cannot be cast to a Rectangle, you will get an InvalidCastException thrown at runtime. Thus, if our Shape sequence had a Circle in it, the call to Cast<Rectangle>() would have failed. As such, you should only do this when you are reasonably sure of what the sequence actually contains (or are willing to handle an exception if you’re wrong).

    Another handy use of Cast<TResult>() is using it to convert an IEnumerable to an IEnumerable<T>. If you look at the signature, you’ll see that the Cast<TResult>() extension method actually extends the older, object-based IEnumerable interface instead of the newer, generic IEnumerable<T>. This is your gateway method for being able to use LINQ on older, non-generic sequences. For example, consider the following:

        // the older, non-generic collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 13 },
            new Rectangle { Width = 10, Height = 20 },
            // ...
        };

    Since this is an older, object-based collection, we cannot use the LINQ extension methods on it directly.
    For example, if I wanted to query the Shape sequence for only those Rectangles whose Width is > 5, I can’t do this:

        // compiler error, Where() operates on IEnumerable<T>, not IEnumerable
        var bigRectangles = shapes.Where(r => r.Width > 5);

    However, I can use Cast<Rectangle>() to treat my ArrayList as an IEnumerable<Rectangle> and then do the query!

        // ah, that’s better!
        var bigRectangles = shapes.Cast<Rectangle>().Where(r => r.Width > 5);

    Or, if you prefer, in LINQ query expression syntax:

        var bigRectangles = from s in shapes.Cast<Rectangle>()
                            where s.Width > 5
                            select s;

    One quick warning: Cast<TResult>() only attempts to cast, it won’t perform a cast conversion. That is, consider this:

        var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };

        // casting ints to longs, this should work, right?
        var asLong = intList.Cast<long>().ToList();

    Will the code above work? No, you’ll get an InvalidCastException. Remember that Cast<TResult>() is an extension of IEnumerable, thus it is a sequence of object, which means that it will box every int as an object as it enumerates over it, and there is no cast conversion from object to long, and thus the cast fails. In other words, a cast from int to long will succeed because there is a conversion from int to long. But a cast from int to object to long will not, because you can only unbox an item by casting it to its exact type. For more information on why cast-converting boxed values doesn’t work, see this post on The Dangers of Casting Boxed Values (here).

    OfType<TResult>() – Filter sequence to only items of type TResult

    So, we’ve seen how we can use Cast<TResult>() to change the type of our sequence, when we expect all the items of the sequence to be of a specific type. But what do we do when a sequence contains many different types, and we are only concerned with a subset of a given type? For example, what if a sequence of Shape contains Rectangle and Circle instances, and we just want to select all of the Rectangle instances? Well, let’s say we had this sequence of Shape:

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    Well, we could get the rectangles using Where() and a type test, like:

        var onlyRectangles = shapes.Where(s => s is Rectangle).ToList();

    But fortunately, an easier way has already been written for us in the form of the OfType<T>() extension method:

        // returns only a sequence of the shapes that are Rectangles
        var onlyRectangles = shapes.OfType<Rectangle>().ToList();

    Now that we have a sequence of only the Rectangles in the original sequence, we can also use this to chain other queries that depend on Rectangles, such as:

        // select only Rectangles, then filter to only those more than
        // 5 units wide...
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    The OfType<Rectangle>() will filter the sequence to only the items that are of type Rectangle (or a subclass of it), and the result is an IEnumerable<Rectangle> to which we can then apply the other LINQ extension methods to query the list further. Just as Cast<TResult>() is an extension method on IEnumerable (and not IEnumerable<T>), the same is true for OfType<T>(). This means that you can use OfType<TResult>() on object-based collections as well.
    For example, given an ArrayList containing Shapes, as below:

        // object-based collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    We can use OfType<Rectangle> to filter the sequence to only Rectangle items (and subclasses), and then chain other LINQ expressions, since we will then be of type IEnumerable<Rectangle>:

        // OfType() converts the sequence of object to a new sequence
        // containing only Rectangle or sub-types of Rectangle.
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    Summary

    So now we’ve seen two different ways to get a sequence of a superclass or interface down to a more specific sequence of a subclass or implementation. The Cast<TResult>() method casts every item in the source sequence to type TResult, and the OfType<TResult>() method selects only those items in the source sequence that are of type TResult. You can use these to downcast sequences, or adapt older types and sequences that only implement IEnumerable (such as DataTable, ArrayList, etc.).

    Technorati Tags: C#,CSharp,.NET,LINQ,Little Wonders,TypeOf,Cast,IEnumerable<T>
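
    A minimal sketch contrasting the failing Cast<long>() with an explicit Select() conversion. This example is not from the post itself, just an illustration of the int-to-long warning above:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class CastVersusSelect
        {
            static void Main()
            {
                var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13 };

                // Throws InvalidCastException on the first element: the non-generic
                // IEnumerable boxes each int as object, and a boxed int can only be
                // unboxed back to int, never cast straight to long.
                try
                {
                    intList.Cast<long>().ToList();
                }
                catch (InvalidCastException)
                {
                    Console.WriteLine("Cast<long>() failed, as the post warns.");
                }

                // Works: Select() performs a real int-to-long conversion per element.
                List<long> viaSelect = intList.Select(i => (long)i).ToList();
                Console.WriteLine(viaSelect.Count); // 7
            }
        }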

    Read the article

  • Select number of rows for each group where two column values makes one group

    - by Fábio Antunes
    I have two select statements joined by UNION ALL. In the first statement a where clause gathers only rows that have been shown previously to the user. The second statement gathers all rows that haven't been shown to the user, therefore I end up with the viewed results first and non-viewed results after. Of course this could simply be achieved with the same select statement using a simple ORDER BY; however, the reason for two separate selects becomes clear once you realize what I hope to accomplish. Consider the following structure and data.

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        | 1  | 1    | 10  | true   | .... |
        | 2  | 10   | 1   | true   | .... |
        | 3  | 1    | 10  | true   | .... |
        | 4  | 6    | 8   | true   | .... |
        | 5  | 1    | 10  | true   | .... |
        | 6  | 10   | 1   | true   | .... |
        | 7  | 8    | 6   | true   | .... |
        | 8  | 10   | 1   | true   | .... |
        | 9  | 6    | 8   | true   | .... |
        | 10 | 2    | 3   | true   | .... |
        | 11 | 1    | 10  | true   | .... |
        | 12 | 8    | 6   | true   | .... |
        | 13 | 10   | 1   | false  | .... |
        | 14 | 1    | 10  | false  | .... |
        | 15 | 6    | 8   | false  | .... |
        | 16 | 10   | 1   | false  | .... |
        | 17 | 8    | 6   | false  | .... |
        | 18 | 3    | 2   | false  | .... |
        +----+------+-----+--------+------+

    Basically I wish all non-viewed rows to be selected by the statement; that is accomplished by checking whether the viewed column is true or false, which is pretty simple and straightforward, nothing to worry about here. However when it comes to the rows already viewed, meaning the column viewed is TRUE, for those records I only want 3 rows to be returned for each group. The appropriate result in this instance should be the 3 most recent rows of each group.

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        | 6  | 10   | 1   | true   | .... |
        | 7  | 8    | 6   | true   | .... |
        | 8  | 10   | 1   | true   | .... |
        | 9  | 6    | 8   | true   | .... |
        | 10 | 2    | 3   | true   | .... |
        | 11 | 1    | 10  | true   | .... |
        | 12 | 8    | 6   | true   | .... |
        +----+------+-----+--------+------+

    As you see from the ideal result set we have three groups. Therefore the desired query for the viewed results should show a maximum of 3 rows for each grouping it finds. In this case these groupings were 10 with 1 and 8 with 6, both of which had three rows to be shown, while the other group, 2 with 3, only had one row to be shown. Please note that from = x and to = y makes the same grouping as from = y and to = x. Therefore, considering the first grouping (10 with 1), from = 10 and to = 1 is the same group as from = 1 and to = 10. However there are plenty of groups in the whole table, and I only wish the 3 most recent of each to be returned by the select statement; that's my problem, I'm not sure how that can be accomplished in the most efficient way possible, considering the table will have hundreds if not thousands of records at some point. Thanks for your help.

    Note: The columns id, from, to and viewed are indexed, that should help with performance.

    PS: I'm unsure on how to name this question exactly, if you have a better idea, be my guest and edit the title.
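
    To pin down the grouping rule being asked for, here is a small LINQ sketch of the same logic. It is purely an illustration with a hypothetical Row class, and it assumes a higher id means more recent; the actual answer still needs to be SQL:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical in-memory stand-in for the table in the question.
        class Row
        {
            public int Id;
            public int From;
            public int To;
            public bool Viewed;
        }

        class GroupingSketch
        {
            static void Main()
            {
                var rows = new List<Row>
                {
                    new Row { Id = 10, From = 2,  To = 3,  Viewed = true },
                    new Row { Id = 11, From = 1,  To = 10, Viewed = true },
                    new Row { Id = 12, From = 8,  To = 6,  Viewed = true },
                    new Row { Id = 13, From = 10, To = 1,  Viewed = false },
                };

                // Treat (from, to) and (to, from) as the same group by ordering the pair.
                // "Most recent" is assumed to mean highest id here.
                var viewedTop3PerGroup = rows
                    .Where(r => r.Viewed)
                    .GroupBy(r => new { A = Math.Min(r.From, r.To), B = Math.Max(r.From, r.To) })
                    .SelectMany(g => g.OrderByDescending(r => r.Id).Take(3));

                // All non-viewed rows are kept unconditionally, mirroring the second SELECT.
                var result = viewedTop3PerGroup
                    .Concat(rows.Where(r => !r.Viewed))
                    .ToList();

                foreach (var r in result)
                    Console.WriteLine("{0}: {1}-{2} viewed={3}", r.Id, r.From, r.To, r.Viewed);
            }
        }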

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS.

    Although all of this traceability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "Asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in notepad, but once imported into TFS you can run it any time you want.
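
    If you want to drive the same kind of Asof query from code, for a scheduled audit report perhaps, the TFS client object model can run WIQL directly. A rough sketch, assuming a placeholder collection URL, project name and date:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class AsOfQuerySample
        {
            static void Main()
            {
                // The collection URL and project name below are placeholders.
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                // The ASOF clause returns the work items as they were on the given date.
                string wiql =
                    "SELECT [System.Id], [System.Title], [System.State] " +
                    "FROM WorkItems " +
                    "WHERE [System.TeamProject] = 'MyProject' " +
                    "AND [System.WorkItemType] = 'Requirement' " +
                    "ASOF '1/1/2010'";

                foreach (WorkItem item in store.Query(wiql))
                {
                    Console.WriteLine("{0}: {1} ({2})", item.Id, item.Title, item.State);
                }
            }
        }
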
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version in your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with those Changesets and change the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business has to sign off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to prove the existence, and then the non-recurrence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, as with Bugs, if the Change Request is big enough to require breaking down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only Affect Requirements and not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?
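
    For audit reports it can also be handy to pull the Changeset-to-Work-Item associations described above out of TFS programmatically via the version control client API. A rough sketch, assuming a placeholder server URL and changeset ID:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ChangesetAuditSample
        {
            static void Main()
            {
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var vcs = collection.GetService<VersionControlServer>();

                // Pull a single changeset and list the work items it was checked in against.
                Changeset cs = vcs.GetChangeset(1234);
                Console.WriteLine("Changeset {0} by {1}: {2}",
                    cs.ChangesetId, cs.Committer, cs.Comment);

                foreach (AssociatedWorkItemInfo wi in cs.AssociatedWorkItems)
                {
                    Console.WriteLine("  Work item {0} ({1}): {2}",
                        wi.Id, wi.WorkItemType, wi.Title);
                }
            }
        }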

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies. 1. Cross-schema dependencies Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: Creating a table with the new schema Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER) Dropping the old table Rename new table to same name as old table Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: Drop foreign key constraint on SchemaA.Table1 Rebuild SchemaB.Table1 Rebuild SchemaA.Table1, adding the foreign key constraint to the new table This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough. 2. Dependency chains The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. 
Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance. 3. Comparison dependencies Consider the following schema: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two objects models, what you will end up with is: SOURCE  TARGET SchemaA.Table1 -> SchemaA.Table1 SchemaB.Table1 -> (no object exists) When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: Create SchemaB.Table1 Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1 Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1): SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);   CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1); Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. 
Furthermore, this dependency chain could be arbitrarily extended.This leads us to the following algorithm for finding all the dependencies of a comparison: Find initial dependencies of schemas the user has selected to compare on the source and target Include these objects in both the source and target object populations Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa Repeat 2 & 3 until no more objects are found For the schema above, this will result in the following sequence of actions: Find initial dependenciesSchemaA.Table1 -> SchemaB.Table1 found on sourceNo objects found on target Include objects in both source and targetSchemaB.Table1 included in source and target Run dependency query, starting with found objectsNo objects to start with on sourceSchemaB.Table1 -> SchemaC.Table1 found on target Include objects in both source and targetSchemaC.Table1 included in source and target Run dependency query on found objectsNo objects found in sourceNo objects to start with in target Stop This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don’t read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don’t yet know about a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed. 4. Object blacklists and fast dependencies When we tested this solution, we were puzzled in that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn’t follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them! 
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
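
    A rough shape of the source/target ping-pong traversal described above, sketched in C# with entirely hypothetical helper types; the real engine is, of course, considerably more involved:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical object identifier; record value equality makes HashSet work as expected.
        record ObjId(string Schema, string Name);

        static class DependencyExpansion
        {
            // Returns the extra objects to populate on both sides, given the objects in the
            // explicitly selected schemas and a per-database direct-dependency lookup.
            public static HashSet<ObjId> Expand(
                IEnumerable<ObjId> selected,
                Func<IReadOnlyCollection<ObjId>, IReadOnlyCollection<ObjId>> sourceDeps,
                Func<IReadOnlyCollection<ObjId>, IReadOnlyCollection<ObjId>> targetDeps,
                ISet<string> blacklistedSchemas)
            {
                var known = new HashSet<ObjId>(selected);
                var frontierForSource = known.ToList();
                var frontierForTarget = known.ToList();
                var extras = new HashSet<ObjId>();

                // Keep bouncing newly found objects between source and target until
                // neither side contributes anything new (steps 2-4 of the algorithm).
                while (frontierForSource.Count > 0 || frontierForTarget.Count > 0)
                {
                    // known.Add filters out objects we have already seen or blacklisted.
                    var foundOnSource = sourceDeps(frontierForSource)
                        .Where(o => !blacklistedSchemas.Contains(o.Schema) && known.Add(o))
                        .ToList();
                    var foundOnTarget = targetDeps(frontierForTarget)
                        .Where(o => !blacklistedSchemas.Contains(o.Schema) && known.Add(o))
                        .ToList();

                    extras.UnionWith(foundOnSource);
                    extras.UnionWith(foundOnTarget);

                    // Objects found on one side seed the next query on the other side.
                    frontierForSource = foundOnTarget;
                    frontierForTarget = foundOnSource;
                }

                return extras;
            }
        }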

    Read the article

  • Forms bound to updateable ADO recordsets are not updateable when the source includes a JOIN

    - by Art
    I'm developing an application in Access 2007. It uses an .accdb front end connecting to an SQL Server 2005 backend. I use forms that are bound to ADO recordsets at runtime. For the sake of efficiency, the recordsets usually contain only one record, and are queried out on the server: Public Sub SetUpFormRecordset(cn As ADODB.Connection, rstIn As ADODB.Recordset, rstSource As String) Dim cmd As ADODB.Command Dim I As Long Set cmd = New ADODB.Command cn.Errors.Clear ' Recordsets based on command object Execute method are Read Only! With cmd Set .ActiveConnection = cn .CommandType = adCmdText .CommandText = rstSource End With With rstIn .CursorType = adOpenKeyset .LockType = adLockPessimistic 'Check the locktype after opening; optimistic locking is worthless on a bound End With ' form, and ADO might open optimistically without firing an error! rstIn.Open cmd, , adOpenKeyset, adLockPessimistic 'This should run the query on the server and return an updatable recordset With cn If .Errors.Count <> 0 Then For Each errADO In .Errors Call HandleADOErrors(.Errors(I)) I = I + 1 Next errADO End If End With End Sub rstSource (the string containg the TSQL on which the recordset is based) is assembled by the calling routine, in this case from the Open event of the form being bound: Private Sub Form_Open(Cancel As Integer) Dim rst As ADODB.Recordset Dim strSource As String, DefaultSource as String Dim lngID As Long lngID = Forms!MyParent.CurrentID strSource = "SELECT TOP (100) PERCENT dbo.Customers.CustomerID, dbo.Customers.LegacyID, dbo.Customers.Active, dbo.Customers.TypeID, dbo.Customers.Category, " & _ "dbo.Customers.Source, dbo.Customers.CustomerName, dbo.Customers.CustAddrID, dbo.Customers.Email, dbo.Customers.TaxExempt, dbo.Customers.SalesTaxCode, " & _ "dbo.Customers.SalesTax2Code, dbo.Customers.CreditLimit, dbo.Customers.CreationDate, dbo.Customers.FirstOrder, dbo.Customers.LastOrder, " & _ "dbo.Customers.nOrders, dbo.Customers.Concurrency, dbo.Customers.LegacyLN, dbo.Addresses.AddrType, dbo.Addresses.AddrLine1, dbo.Addresses.AddrLine2, " & _ "dbo.Addresses.City, dbo.Addresses.State, dbo.Addresses.Country, dbo.Addresses.PostalCode, dbo.Addresses.PhoneLandline, dbo.Addresses.Concurrency " & _ "FROM dbo.Customers INNER JOIN " & _ "dbo.Addresses ON dbo.Customers.CustAddrID = dbo.Addresses.AddrID " strSource = strSource & "WHERE dbo.Customers.CustomerID= " & lngID With Me 'Default is Set up for editing one record If Not Nz(.RecordSource, vbNullString) = vbNullString Then If .Dirty Then .Dirty = False 'Save any changes on the form .RecordSource = vbNullString End If If rst Is Nothing Then 'Might not be first time through DefaultSource = .RecordSource Else rst.Close Set rst = Nothing End If End With Set rst = New ADODB.Recordset Call setupformrecordset(dbconn, rst, strSource) 'dbconn is a global variable With Me Set .Recordset = rst End With End Sub The recordset that is returned from setupformrecordset is fully updateable, and its .Supports property shows this. It can be edited and updated in code. The entire form, however, is read only, even though it's .AllowEdits and .AllowAdditions properties are both true. Even the fields from the right hand side (the 'many' side) cannot be edited. Removing the INNER JOIN clause from the TSQL (restricting strSource to one table) makes the form fully editable. I've verified that the TSQL includes priimary key fields from both tables, and each table includes a timestamp field for concurrency. 
I tried changing the .CursorType and .CursorLocation properties of the recordset to no avail. What am I doing wrong?

    Read the article

  • A Look Inside JSR 360 - CLDC 8

    - by Roger Brinkley
    If you didn't notice during JavaOne the Java Micro Edition took a major step forward in its consolidation with Java Standard Edition when JSR 360 was proposed to the JCP community. Over the last couple of years there has been a focus to move Java ME back in line with it's big brother Java SE. We see evidence of this in JCP itself which just recently merged the ME and SE/EE Executive Committees into a single Java Executive Committee. But just before that occurred JSR 360 was proposed and approved for development on October 29. So let's take a look at what changes are now being proposed. In a way JSR 360 is returning back to the original roots of Java ME when it was first introduced. It was indeed a subset of the JDK 4 language, but as Java progressed many of the language changes were not implemented in the Java ME. Back then the tradeoff was still a functionality, footprint trade off but the major market was feature phones. Today the market has changed and CLDC, while it will still target feature phones, will have it primary emphasis on embedded devices like wireless modules, smart meters, health care monitoring and other M2M devices. The major changes will come in three areas: language feature changes, library changes, and consolidating the Generic Connection Framework.  There have been three Java SE versions that have been implemented since JavaME was first developed so the language feature changes can be divided into changes that came in JDK 5 and those in JDK 7, which mostly consist of the project Coin changes. There were no language changes in JDK 6 but the changes from JDK 5 are: Assertions - Assertions enable you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light. In the example code below if the interval isn't between 0 and and 1,00 the an error of "Invalid value?" would be thrown. private void setInterval(int interval) { assert interval > 0 && interval <= 1000 : "Invalid value?" } Generics - Generics add stability to your code by making more of your bugs detectable at compile time. Code that uses generics has many benefits over non-generic code with: Stronger type checks at compile time. Elimination of casts. Enabling programming to implement generic algorithms. Enhanced for Loop - the enhanced for loop allows you to iterate through a collection without having to create an Iterator or without having to calculate beginning and end conditions for a counter variable. The enhanced for loop is the easiest of the new features to immediately incorporate in your code. In this tip you will see how the enhanced for loop replaces more traditional ways of sequentially accessing elements in a collection. void processList(Vector<string> list) { for (String item : list) { ... Autoboxing/Unboxing - This facility eliminates the drudgery of manual conversion between primitive types, such as int and wrapper types, such as Integer.  Hashtable<Integer, string=""> data = new Hashtable<>(); void add(int id, String value) { data.put(id, value); } Enumeration - Prior to JDK 5 enumerations were not typesafe, had no namespace, were brittle because they were compile time constants, and provided no informative print values. JDK 5 added support for enumerated types as a full-fledged class (dubbed an enum type). 
In addition to solving all the problems mentioned above, it allows you to add arbitrary methods and fields to an enum type, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type. enum Season {WINTER, SPRING, SUMMER, FALL}; } private Season season; void setSeason(Season newSeason) { season = newSeason; } Varargs - Varargs eliminates the need for manually boxing up argument lists into an array when invoking methods that accept variable-length argument lists. The three periods after the final parameter's type indicate that the final argument may be passed as an array or as a sequence of arguments. Varargs can be used only in the final argument position. void warning(String format, String... parameters) { .. for(String p : parameters) { ...process(p);... } ... } Static Imports -The static import construct allows unqualified access to static members without inheriting from the type containing the static members. Instead, the program imports the members either individually or en masse. Once the static members have been imported, they may be used without qualification. The static import declaration is analogous to the normal import declaration. Where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification. import static data.Constants.RATIO; ... double r = Math.cos(RATIO * theta); Annotations - Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. There are a number of uses for annotations including information for the compiler, compiler-time and deployment-time processing, and run-time processing. They can be applied to a program's declarations of classes, fields, methods, and other program elements. @Deprecated public void clear(); The language changes from JDK 7 are little more familiar as they are mostly the changes from Project Coin: String in switch - Hey it only took us 18 years but the String class can be used in the expression of a switch statement. Fortunately for us it won't take that long for JavaME to adopt it. switch (arg) { case "-data": ... case "-out": ... Binary integral literals and underscores in numeric literals - Largely for readability, the integral types (byte, short, int, and long) can also be expressed using the binary number system. and any number of underscore characters (_) can appear anywhere between digits in a numerical literal. byte flags = 0b01001111; long mask = 0xfff0_ff08_4fff_0fffl; Multi-catch and more precise rethrow - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration. catch (IOException | InterruptedException ex) { logger.log(ex); throw ex; } Type Inference for Generic Instance Creation - Otherwise known as the diamond operator, the type arguments required to invoke the constructor of a generic class can be replaced with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context.  
map = new Hashtable<>(); Try-with-resource statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement.  try (DataInputStream is = new DataInputStream(...)) { return is.readDouble(); } Simplified varargs method invocation - The Java compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduced a compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings. On the library side there are new features that will be added to satisfy the language requirements above and some to improve the currently available set of APIs. The library changes include: Collections update - New Collection, List, Set and Map, Iterable and Iterator as well as implementations including Hashtable and Vector. Most of the work is to support generics. String - New StringBuilder and CharSequence as well as a String formatter. The javac compiler now uses StringBuilder instead of StringBuffer. Since StringBuilder is not synchronized there is a performance increase, which has necessitated a change in how the String constructor works. Comparable interface - The Comparable interface works with Collections, making it easier to reuse. Try with resources - Closeable and AutoCloseable. Annotations - While support for Annotations is provided it will only be compile-time support: SuppressWarnings, Deprecated, Override. NIO - There is a subset of NIO Buffer that has been in use by some of the graphics packages and needs to be pulled in, as well as support for a subset of NIO File IO. Platform extensibility via Service Providers (ServiceLoader) - The ServiceLoader interface does late binding of an interface to existing implementations. It helps to package an interface and supply the behavior of the implementation at a later point in time. Provider classes must have a zero-argument constructor so that they can be instantiated during loading. They are located and instantiated on demand and are identified via a provider-configuration file in the META-INF/services resource directory. This is a mechanism from Java SE. import com.XYZ.ServiceA; ServiceLoader<ServiceA> sl1 = ServiceLoader.load(ServiceA.class); Resources: META-INF/services/com.XYZ.ServiceA: ServiceAProvider1 ServiceAProvider2 ServiceAProvider3 META-INF/services/ServiceB: ServiceBProvider1 ServiceBProvider2 The Generic Connection Framework (GCF) was previously specified in a number of different JSRs including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, upgrade the APIs, as well as provide new functionality. The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations. Because of size limitations as well as the complexity of some features, InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained. 
JSR 360 is a major step forward for Java ME in terms of platform modernization, language alignment, and embedded support. If you're interested in following the progress of this JSR see the JSR's java.net project for details of the email lists, discussions groups.

    Read the article

  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the asp ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging paremeters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this: [Method 1] ;WITH stuff AS ( SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) ) SELECT * FROM stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row One problem with this is that it doesn't give the total count and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal. [Method 2] An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) +' bit more SQL' gubbins. [Method 3] The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable is to go back to the "old way" of using a table variable: DECLARE @stuff TABLE (Row INT, ...) INSERT INTO @stuff SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) SELECT * FROM stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row (Or a similar method using an IDENTITY column on the table variable). Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter. I did some tests and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come up on top (almost twice as quick as the other 2). Then I decided to test probably I needed the complexity and I needed the SQL to be in stored procedures. With this I get method 1 taking nearly twice as long as the other 2 methods. Which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3? UPDATE - 15 March 2012 I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the resultset. This seemed to add significantly to the time (more than I expected). 
I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available) but still the comparison should be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
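
    Whichever SQL method wins, the calling side looks much the same from .NET. A sketch, with the procedure and parameter names invented to match the shape of the question:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class PagedCallSample
        {
            static void Main()
            {
                // Assumptions: a proc named dbo.GetCustomersPage with these parameters
                // and a @TotalRows OUTPUT parameter, mirroring the question's setup.
                using (var cn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand("dbo.GetCustomersPage", cn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@startRowIndex", 0);
                    cmd.Parameters.AddWithValue("@maximumRows", 25);
                    cmd.Parameters.AddWithValue("@SortBy", "Name");

                    var total = cmd.Parameters.Add("@TotalRows", SqlDbType.Int);
                    total.Direction = ParameterDirection.Output;

                    cn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ... bind the page of rows ...
                        }
                    }

                    // Output parameters are populated once the reader has been closed.
                    Console.WriteLine("Total rows: {0}", total.Value);
                }
            }
        }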

    Read the article

  • CodePlex Daily Summary for Wednesday, August 20, 2014

    CodePlex Daily Summary for Wednesday, August 20, 2014Popular ReleasesMolGridCal & MolCal: MolGridCal tutorial v1.1: Update the contents for grid computing virtual screening.SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements Bug fixes Stores auth method and user name Moved experimental settings to Advanced boxCtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.SQL Scripts to Improve DNN Performance: Turbo Scripts 0.9.1 for DNN 7.1.2 - 7.3.2: Support for DNN 7.3.2 added Fixed a typo in GetVendor procedure New goodies TurboDbConfig and TurboDNNSettings added (re-uploaded with fixed file name)The IRIS Toolbox Project: IRIS Toolbox Release 20140819: Regular Updates Issue 66 (66) closed, bug fixed. Calls to irisstartup and iriscleanup occasionally crashed when older IRIS versions were on the search path. Issue 65 (65) closed, bug fixed. The function tseries/convert crashed when called with 'method=' 'first' or 'last'.ConEmu - Windows console with tabs: ConEmu 140819 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe" HDD Guardian: HDD Guardian 0.6.1: New: package now include smartctl 6.3; Removed: standard notification e-mail. Now you have to set your mail server to send e-mail alerts; Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added Support for 'MadImage.org' links NEW: Added Support for 'ImgSpot.org' links NEW: Added Support for 'ImgClick.net' links NEW: Added Support for 'Imaaage.com' links NEW: Added Support for 'Image-Bugs.com' links NEW: Added Support for 'Pictomania.org' links NEW: Added Support for 'ImgDap.com' links NEW: Added Support for 'FileSpit.com' links FIXED: 'ImgSee.me' linksLinq 4 Javascript: Version 2.4: Minor Changes Made Added Count() and Count(with where clause) Distinct will now use a dictionary instead of a custom dictionary object Organize the unit tests. The variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. 
Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...Fluentx: Fluentx v1.5.3: Added few more extension methods.fastJSON: v2.1.2: 2.1.2 - bug fix circular referencesJPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728 JPush REST API Version: v3 JPush Documentation Reference .NET framework: v4.0 or above. Sample: class: JPushClientV3 2014 Augest 15th.SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.Google .Net API: Drive.Sample: Google .NET Client API – Drive.SampleInstructions for the Google .NET Client API – Drive.Sample</h2> http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.SampleBrowse Source, or main file http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samplesProgram.cs <h3>1. Checkout Instructions</h3> <p><b>Prerequisites:</b> Install Visual Studio, and <a href="http://mercurial.selenic.com/">Mercurial</a>.</p> ...FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...DNN CMS Platform: 07.03.02: Major Highlights Fixed backwards compatibility issue with 3rd party control panels Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari Fixed issue where users were able to create pages with the same name Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade Fixed issue that stopped some skins from being upgraded to newer versions Fixed issue that randomly showed an unexpected error during us...WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel works) - Units are not supported yet (Coming up) The Mac version is yet as tested as the windows version.New ProjectsCryptography Enumerations JavaScript Shell: Javascript shell for CAPICOM enumsFamiglia Information Server: Famiglia ProjectHustle: Hustle is a small utility that can be used to manage threads when executing command line tools. 
The tool was built to take advantage parallel loading in TM1.LuckyDraw: an lucky drawer for 2nd Sudoku Competition Powered by PythonMachina Aurum WebDriver Powershell: Powershell commands to the http://www.w3.org/TR/webdriver/ specification (works on Internet Explorer 11 and Chrome)ODBC Connect: ODBC Connect is a Windows user interface for 32bit and 64bit ODBC data sources. Generate SQL statements, export to file, run from the command line to automate.Post Publisher: Post Publisher is a post publishing platform. This is a work-in-progress.Samurai Framework: Samurai is a game framework written in C# using OpenGL for graphics. It aims to be a higher level abstraction rather than just a simple OpenGL binding.SharePoint 2013 tokenizing user/group autocomplete: SharePoint 2013 tokenizing user/group autocomplete plugin allow user to search/select users/groups.SharePoint 2013 User Lync Presence using jQuery: SharePoint 2013 User Lync Presence jQuery plugin renders the presence for particular user with or without picture.SharePoint Search Box User/Group AutoComplete: SharePoint Search box User/Group autocomplete is a jQuery plugin to quick search the user/group from OOB search input control.TI Helper: TI Helper is an Auto Hot Key script that can be used to enhance the options in the IBM Cognos TM1 Turbo Integrator (TI) process editor.trade-software: for trading educationWPF Graphing: CUSTOME GRAPHING SOLUTIONYoutubeChannelDownloader: I love to see video on my favorite channel,and I want to download videos on this channel to watch later. So I made this app.

    Read the article

  • Neo4j increasing latency as SKIP increases on Cypher query + REST API

    - by voldomazta
    My setup: Java(TM) SE Runtime Environment (build 1.7.0_45-b18), Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode), Neo4j 2.0.0-M06 Enterprise. First I made sure I warmed up the cache by executing the following: START n=node(*) RETURN COUNT(n); START r=relationship(*) RETURN count(r); The size of the graph is 63,677 nodes and 7,169,995 relationships. Now I have the following query:
START u1=node:node_auto_index('uid:39') MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user) WHERE u2.uid <> 39 WITH u2.uid AS uid, (CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS have RETURN uid, SUM(have) AS total ORDER BY total DESC SKIP 0 LIMIT 25
This UID has about 40k+ results that I want to paginate. The initial skip was around 773ms. I tried page 2 (skip 25) and the latency was about the same; even at page 500 it only rose to around 900ms, so I didn't really bother. Then I tried some fast-forward paging and jumped by thousands: 1000, then 2000, then 3000. I was hoping the ORDER BY arrangement would already have been cached by Neo4j and that SKIP would just move to that position in the result rather than iterating through each row again. But for each thousand I skipped, the latency increased by a lot. It's not just cache warming, because for one I already warmed up the cache, and two, I tried each skip a couple of times and it yielded the same results:
SKIP 0: 773ms
SKIP 1000: 1369ms
SKIP 2000: 2491ms
SKIP 3000: 3899ms
SKIP 4000: 5686ms
SKIP 5000: 7424ms
Now who the hell would want to view 5000 pages of results? 40k even?! :) Good point! I will probably put a cap on the maximum results a user can view, but I was just curious about this phenomenon. Will somebody please explain why Neo4j seems to be re-iterating through stuff which appears to be already known to it?
Here is my profiling for the 0 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(0)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14 of type Any),false)"], limit="Add(Literal(0),Literal(25))", _rows=25, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) And for the 5000 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(5000)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872 of type Any),false)"], limit="Add(Literal(5000),Literal(25))", _rows=5025, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) The only difference is the LIMIT clause on the Top function. I hope we can make this work as intended, I really don't want to delve into doing an embedded Neo4j + my own Jetty REST API for the web app.
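One reading of the two profiles above, hedged since the planner internals aren't shown: the only difference is the Top operator's limit, Add(Literal(5000),Literal(25)), and its _rows figure grows from 25 to 5025, which suggests the engine keeps the first skip + limit ordered rows and only then slices the skipped ones away, so the cost rises with the offset. Offset-style paging behaves this way in most query pipelines. As a language-neutral illustration only (C#/LINQ over made-up data, not Neo4j code), the sketch below shows that Skip() still pulls every skipped element through the pipeline:

using System;
using System.Collections.Generic;
using System.Linq;

static class OffsetPagingDemo
{
    static int enumerated;

    // A source that counts how many elements are actually pulled through it.
    static IEnumerable<int> Counted(int n)
    {
        for (int i = 0; i < n; i++) { enumerated++; yield return i; }
    }

    public static void Main()
    {
        foreach (var skip in new[] { 0, 1000, 5000 })
        {
            enumerated = 0;
            var page = Counted(40000).Skip(skip).Take(25).ToList();
            // Work grows with the offset: skip + 25 elements are enumerated each time,
            // mirroring the Top operator's _rows of 25 vs. 5025 in the profiles above.
            Console.WriteLine("SKIP {0}: {1} rows returned, {2} elements enumerated",
                skip, page.Count, enumerated);
        }
    }
}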

    Read the article

  • Data Source Security Part 3

    - by Steve Felts
    In part one, I introduced the security features and talked about the default behavior. In part two, I defined the two major approaches to security credentials: directly using database credentials and mapping WLS user credentials to database credentials. Now it's time to get down to a couple of the security options (each of which can use database credentials or WLS credentials).
Set Client Identifier on Connection
When "Set Client Identifier" is enabled on the data source, a client property is associated with the connection. The underlying SQL user remains unchanged for the life of the connection but the client value can change. This information can be used for accounting, auditing, or debugging. The client property is based on either the WebLogic user mapped to a database user using the credential map, or the database user parameter passed directly to the getConnection() method, based on the "use database credentials" setting described earlier. To enable this feature, select "Set Client ID On Connection" in the Console. See "Enable Set Client ID On Connection for a JDBC data source" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableCredentialMapping.html in Oracle WebLogic Server Administration Console Help. The Set Client Identifier feature is only available for use with the Oracle thin driver and the IBM DB2 driver, based on the following interfaces. For pre-Oracle 12c, oracle.jdbc.OracleConnection.setClientIdentifier(client) is used. See http://docs.oracle.com/cd/B28359_01/network.111/b28531/authentication.htm#i1009003 for more information about how to use this for auditing and debugging. You can get the value using getClientIdentifier() from the driver. To get back the value from the database as part of a SQL query, use a statement like the following: "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL". Starting in Oracle 12c, java.sql.Connection.setClientInfo("OCSID.CLIENTID", client) is used. This is a JDBC standard API, although the property values are proprietary. A problem with setClientIdentifier usage is that there are pieces of the Oracle technology stack that set and depend on this value. If application code also sets this value, it can cause problems. This has been addressed with setClientInfo by making use of this method a privileged operation. A well-managed container can restrict the Java security policy grants to specific namespaces and code bases, and protect the container from out-of-control user code.
When running with the Java security manager, permission must be granted in the Java security policy file for permission "oracle.jdbc.OracleSQLPermission" "clientInfo.OCSID.CLIENTID"; Using the name "OCSID.CLIENTID" allows for upward-compatible use of "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL", or use the JDBC standard API java.sql.Connection.getClientInfo("OCSID.CLIENTID") to retrieve the value. This value in the Oracle USERENV context can be used to drive the Oracle Virtual Private Database (VPD) feature to create security policies to control database access at the row and column level. Essentially, Oracle Virtual Private Database adds a dynamic WHERE clause to a SQL statement that is issued against the table, view, or synonym to which an Oracle Virtual Private Database security policy was applied. See Using Oracle Virtual Private Database to Control Data Access http://docs.oracle.com/cd/B28359_01/network.111/b28531/vpd.htm for more information about VPD. Using this data source feature means that no programming is needed on the WLS side to set this context; it is set and cleared by the WLS data source code. For the IBM DB2 driver, com.ibm.db2.jcc.DB2Connection.setDB2ClientUser(client) is used for older releases (prior to version 9.5). This specifies the current client user name for the connection. Note that the current client user name can change during a connection (unlike the user). This value is also available in the CURRENT CLIENT_USERID special register. You can select it using a statement like "select CURRENT CLIENT_USERID from SYSIBM.SYSTABLES". When running the IBM DB2 driver with JDBC 4.0 (starting with version 9.5), java.sql.Connection.setClientInfo("ClientUser", client) is used. You can retrieve the value using java.sql.Connection.getClientInfo("ClientUser") instead of the DB2 proprietary API (even if it was set with setDB2ClientUser()).
Oracle Proxy Session
Oracle proxy authentication allows one JDBC connection to act as a proxy for multiple (serial) light-weight user connections to an Oracle database with the thin driver. You can configure a WebLogic data source to allow a client to connect to a database through an application server as a proxy user. The client authenticates with the application server and the application server authenticates with the Oracle database. This allows the client's user name to be maintained on the connection with the database. Use the following steps to configure proxy authentication on a connection to an Oracle database.
1. If you have not yet done so, create the necessary database users.
2. On the Oracle database, provide CONNECT THROUGH privileges. For example: SQL> ALTER USER connectionuser GRANT CONNECT THROUGH dbuser; where "connectionuser" is the name of the application user to be authenticated and "dbuser" is an Oracle database user.
3. Create a generic or GridLink data source and set the user to the value of dbuser.
4a. To use WLS credentials, create an entry in the credential map that maps the value of wlsuser to the value of dbuser, as described earlier.
4b. To use database credentials, enable "Use Database Credentials", as described earlier.
5. Enable Oracle Proxy Authentication, see "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.
6. Log on to a WebLogic Server instance using the value of wlsuser or dbuser.
7. Get a connection using getConnection(username, password).
The credentials are based on either the WebLogic user that is mapped to a database user or the database user directly, based on the "use database credentials" setting. You can see the current user and proxy user by executing: "select user, sys_context('USERENV','PROXY_USER') from DUAL". Note: getConnection fails if "Use Database Credentials" is not enabled and the value of the user/password is not valid for a WebLogic Server user. Conversely, it fails if "Use Database Credentials" is enabled and the value of the user/password is not valid for a database user. A proxy session is opened on the connection based on the user each time a connection request is made on the pool. The proxy session is closed when the connection is returned to the pool. Opening or closing a proxy session has the following impact on JDBC objects:
- Closes any existing statements (including result sets) from the original connection.
- Clears the WebLogic Server statement cache.
- Clears the client identifier, if set.
- The WebLogic Server test statement for a connection is recreated for every proxy session.
These behaviors may impact applications that share a connection across instances and expect some state to be associated with the connection. Oracle proxy session is also implicitly enabled when use-database-credentials is enabled and getConnection(user, password) is called, starting in WLS Release 10.3.6. Remember that this only works when using the Oracle thin driver. To summarize, the definition of oracle-proxy-session is as follows:
- If proxy authentication is enabled and identity based pooling is also enabled, it is an error.
- If a user is specified on getConnection() and identity-based-connection-pooling-enabled is false, then oracle-proxy-session is treated as true implicitly (it can also be explicitly true).
- If a user is specified on getConnection() and identity-based-connection-pooling-enabled is true, then oracle-proxy-session is treated as false.

    Read the article

  • Advantage database throws an exception when attempting to delete a record with a like statement used

    - by ChrisR
    The code below shows that a record is deleted when the sql statement is: select * from test where qty between 50 and 59 but the sql statement: select * from test where partno like 'PART/005%' throws the exception: Advantage.Data.Provider.AdsException: Error 5072: Action requires read-write access to the table How can you reliably delete a record with a where clause applied? Note: I'm using Advantage Database v9.10.1.9, VS2008, .Net Framework 3.5 and WinXP 32 bit using System.IO; using Advantage.Data.Provider; using AdvantageClientEngine; using NUnit.Framework; namespace NetworkEidetics.Core.Tests.Dbf { [TestFixture] public class AdvantageDatabaseTests { private const string DefaultConnectionString = @"data source={0};ServerType=local;TableType=ADS_CDX;LockMode=COMPATIBLE;TrimTrailingSpaces=TRUE;ShowDeleted=FALSE"; private const string TestFilesDirectory = "./TestFiles"; [SetUp] public void Setup() { const string createSql = @"CREATE TABLE [{0}] (ITEM_NO char(4), PARTNO char(20), QTY numeric(6,0), QUOTE numeric(12,4)) "; const string insertSql = @"INSERT INTO [{0}] (ITEM_NO, PARTNO, QTY, QUOTE) VALUES('{1}', '{2}', {3}, {4})"; const string filename = "test.dbf"; var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { connection.Open(); using (var transaction = connection.BeginTransaction()) { using (var command = connection.CreateCommand()) { command.CommandText = string.Format(createSql, filename); command.Transaction = transaction; command.ExecuteNonQuery(); } transaction.Commit(); } using (var transaction = connection.BeginTransaction()) { for (var i = 0; i < 1000; ++i) { using (var command = connection.CreateCommand()) { var itemNo = string.Format("{0}", i); var partNumber = string.Format("PART/{0:d4}", i); var quantity = i; var quote = i * 10; command.CommandText = string.Format(insertSql, filename, itemNo, partNumber, quantity, quote); command.Transaction = transaction; command.ExecuteNonQuery(); } } transaction.Commit(); } connection.Close(); } } [TearDown] public void TearDown() { File.Delete("./TestFiles/test.dbf"); } [Test] public void CanDeleteRecord() { const string sqlStatement = @"select * from test"; Assert.AreEqual(1000, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(999, GetRecordCount(sqlStatement)); } [Test] public void CanDeleteRecordBetween() { const string sqlStatement = @"select * from test where qty between 50 and 59"; Assert.AreEqual(10, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(9, GetRecordCount(sqlStatement)); } [Test] public void CanDeleteRecordWithLike() { const string sqlStatement = @"select * from test where partno like 'PART/005%'"; Assert.AreEqual(10, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(9, GetRecordCount(sqlStatement)); } public int GetRecordCount(string sqlStatement) { var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { connection.Open(); using (var command = connection.CreateCommand()) { command.CommandText = sqlStatement; var reader = command.ExecuteExtendedReader(); return reader.GetRecordCount(AdsExtendedReader.FilterOption.RespectFilters); } } } public void DeleteRecord(string sqlStatement, int rowIndex) { var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { 
connection.Open(); using (var command = connection.CreateCommand()) { command.CommandText = sqlStatement; var reader = command.ExecuteExtendedReader(); reader.GotoBOF(); reader.Read(); if (rowIndex != 0) { ACE.AdsSkip(reader.AdsActiveHandle, rowIndex); } reader.DeleteRecord(); } connection.Close(); } } } }
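A hedged aside for anyone reproducing the test above: when the goal is simply to delete the rows matching a WHERE clause, issuing a DELETE statement through the same provider sidesteps positioning an extended reader on the row at all. The sketch below is an untested variation of the fixture above (same connection string and table name) and assumes the Advantage SQL engine accepts a standard DELETE ... WHERE statement; it does not explain why the reader-based delete demands read-write access for the LIKE case.

using Advantage.Data.Provider;

public static class AdvantageDeleteHelper
{
    // Deletes all rows matching the given WHERE clause in one statement,
    // instead of positioning an extended reader and calling DeleteRecord().
    // The whereClause values here come from the test's own constant strings.
    public static int DeleteWhere(string connectionString, string whereClause)
    {
        using (var connection = new AdsConnection(connectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "DELETE FROM test WHERE " + whereClause;
                return command.ExecuteNonQuery();
            }
        }
    }
}

// Usage, e.g.: AdvantageDeleteHelper.DeleteWhere(connectionString, "partno like 'PART/005%'");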

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database scheme. But still use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table scheme. There will however arise a couple of meta fields later (ca 10-20). I want some flexibility there and not adapt the database manually each time, or -worse- change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code: function save($data, $table="forum") { // columns if ($new_fields = array_diff(array_keys($data), known_fields($table))) { extend_schema($table, $new_fields, $data); } // save $columns = implode("`, `", array_keys($data)); $qm = str_repeat(",?", count(array_keys($data)) - 1); echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);"); function known_fields($table) { return unserialize(@file_get_contents("db:$table")) ?: array("id"); function extend_schema($table, $new_fields, $data) { foreach ($new_fields as $field) { echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;"); Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into BLOB. Any new/extraneous meta fields could be opaque to the database. Simply sorting out the virtual fields from the real database columns is simple. And the meta fields can just be serialize()d into a blob/text field then: function ext_save($data, $table="forum") { $db_fields = array("id", "content", "flags", "ext"); // disjoin foreach (array_diff(array_keys($data),$db_fields) as $key) { $data["ext"][$key] = $data[$key]; unset($data[$key]); } $data["ext"] = serialize($data["ext"]); Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's compacter and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. How are such columns called usually? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table scheme seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/innodb stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.
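The disjoin step in option (2) is language-neutral. Purely for illustration, here is a minimal C# sketch of the same splitting idea; the column list is hypothetical and JSON stands in for PHP's serialize(), since the only requirement is that the extra fields round-trip through one opaque column:

using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

static class ExtStorage
{
    // Columns that physically exist on the table; everything else gets packed into "ext".
    static readonly HashSet<string> DbFields = new HashSet<string> { "id", "content", "flags" };

    // Splits the incoming field map into real column values plus one serialized blob.
    public static Dictionary<string, object> PrepareForSave(IDictionary<string, object> data)
    {
        var row = data.Where(kv => DbFields.Contains(kv.Key))
                      .ToDictionary(kv => kv.Key, kv => kv.Value);
        var ext = data.Where(kv => !DbFields.Contains(kv.Key))
                      .ToDictionary(kv => kv.Key, kv => kv.Value);
        row["ext"] = JsonSerializer.Serialize(ext);   // stored in the TEXT/BLOB "ext" column
        return row;
    }
}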

    Read the article

  • Make interchangeable class types via pointer casting only, without having to allocate any new objects?

    - by HostileFork
    UPDATE: I do appreciate "don't want that, want this instead" suggestions. They are useful, especially when provided in context of the motivating scenario. Still...regardless of goodness/badness, I've become curious to find a hard-and-fast "yes that can be done legally in C++11" vs "no it is not possible to do something like that". I want to "alias" an object pointer as another type, for the sole purpose of adding some helper methods. The alias cannot add data members to the underlying class (in fact, the more I can prevent that from happening the better!) All aliases are equally applicable to any object of this type...it's just helpful if the type system can hint which alias is likely the most appropriate. There should be no information about any specific alias that is ever encoded in the underlying object. Hence, I feel like you should be able to "cheat" the type system and just let it be an annotation...checked at compile time, but ultimately irrelevant to the runtime casting. Something along these lines: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Under the hood, the factory method is actually making a NodeBase class, and then using a similar reinterpret_cast to return it as a Node<AccessorFoo>*. The easy way to avoid this is to make these lightweight classes that wrap nodes and are passed around by value. Thus you don't need casting, just Accessor classes that take the node handle to wrap in their constructor: AccessorFoo foo (NodeBase::createViaFactory()); AccessorBar bar (foo.getNode()); But if I don't have to pay for all that, I don't want to. That would involve--for instance--making a special accessor type for each sort of wrapped pointer (AccessorFooShared, AccessorFooUnique, AccessorFooWeak, etc.) Having these typed pointers being aliased for one single pointer-based object identity is preferable, and provides a nice orthogonality. So back to that original question: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Seems like there would be some way to do this that might be ugly but not "break the rules". According to ISO14882:2011(e) 5.2.10-7: An object pointer can be explicitly converted to an object pointer of a different type.70 When a prvalue v of type "pointer to T1" is converted to the type "pointer to cv T2", the result is static_cast(static_cast(v)) if both T1 and T2 are standard-layout types (3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type "pointer to T1" to the type "pointer to T2" (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified. 
Drilling into the definition of a "standard-layout class", we find: has no non-static data members of type non-standard-layout-class (or array of such types) or reference, and has no virtual functions (10.3) and no virtual base classes (10.1), and has the same access control (clause 11) for all non-static data members, and has no non-standard-layout base classes, and either has no non-static data member in the most-derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and has no base classes of the same type as the first non-static data member. Sounds like working with something like this would tie my hands a bit with no virtual methods in the accessors or the node. Yet C++11 apparently has std::is_standard_layout to keep things checked. Can this be done safely? Appears to work in gcc-4.7, but I'd like to be sure I'm not invoking undefined behavior.

    Read the article

  • CodePlex Daily Summary for Saturday, August 23, 2014

    CodePlex Daily Summary for Saturday, August 23, 2014Popular ReleasesDIII Save Editor: ROS Alpha 1.2.14.100: initial Ros alpha release please report all bugsSEToolbox: SEToolbox 01.044.014 Release 2: Fixed Ship name not saving. Fixed broken cubes view Bug. Fixed cast VRage.MyFixedPoint error when opening games with Meteors. Added checkbox when Importing 3d model to Export ship, to fill it as solid.CS-Script Source: Release v3.8.5: Fixed problem with the warnings getting hidden in case of the successful compilation cs-script.7z - CS-Script Suite (binaries, documentation, samples) cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples) cs-scriptDocs.7z - CS-Script DocumentationOutlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for new version: Added button in config-window to reset the last backup-time (this will trigger the backup after closing outlook) Minimum interval set to 0 (backup at each closing of outlook) Catch exception when data store entry is corrupt Added two parameters (prefix and suffix) to automatically rename the backup file Updated VSTO-Runtime to 10.0.50325 Upgraded project to Visual Studio 2013 Added optional command to run after backup (e.g. pack backup files, ...) Add...babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21New feature: add a file search window ( ctrl+1 or ALT+L ), like The file search in VC Assistant; Stability improvement: performance improvement when BabeLua load/unload; performance improvement when debugger load lua files;Open NFe: RDI Open NFe 3.0 (alpha): Atualização para o layout 3.10 da NFe.MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348 What's changed? Update namespace and assembly name. Bug fixing.SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements Bug fixes Stores auth method and user name Moved experimental settings to Advanced boxCtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.HDD Guardian: HDD Guardian 0.6.1: New: package now include smartctl 6.3; Removed: standard notification e-mail. Now you have to set your mail server to send e-mail alerts; Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added Support for 'MadImage.org' links NEW: Added Support for 'ImgSpot.org' links NEW: Added Support for 'ImgClick.net' links NEW: Added Support for 'Imaaage.com' links NEW: Added Support for 'Image-Bugs.com' links NEW: Added Support for 'Pictomania.org' links NEW: Added Support for 'ImgDap.com' links NEW: Added Support for 'FileSpit.com' links FIXED: 'ImgSee.me' linksExchange Database Recovery With and Without Log Files is Possible: Exchange Recovery Application: This Exchange Recovery Software comes with free trial edition which helps users to inspect the working capability of the recovery process. 
Download free demo version and repair inaccessible mailboxes from EDB file without any obstructions.Linq 4 Javascript: Version 2.4: Minor Changes Made Added Count() and Count(with where clause) Distinct will now use a dictionary instead of a custom dictionary object Organize the unit tests. The variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.Office / SharePoint 2013 Continuous Integration with TFS 2012: 1.1.0.1: Fixed the following issues in TfsDropDrownloader: Updated to make it work with VS 2013 (including VS 2013 updates) in addition to VS2012. Extend the timeout of downloading drops from 100 seconds to 1 hour. Added more trouble shooting information in the output.CRM Solution CommandLine Helper: CRM Solution Cmd Helper 1.0.0.4: Includes : - Bug fix = Export argument validation : check directory path existence (thanks mszlapa)Office To PDF: OfficeToPDF 1.4: Adds support for additional file types: * mpp (requires MS Project >= 2010) * vsdx, vsdm (requires MS Visio >= 2013) * csv * odt, odc, odp * pot, potm, potx Improves stability and clean removal of COM objects. Adds new flags: * /verbose - to be more verbose when running * /markup - to allow document markup in the PDF when converting Word documents * /excel_max_rows - adds a maximum limit on the number of rows a worksheet can contain when converting Excel documents * /pdfa - crea...MongoRepository: MongoRepository 1.6.6: Installing using NuGet (recommended)MongoRepository is now a NuGet package for your convenience. Step-by-step instructions can be found in Installing MongoRepository using NuGet Installing using BinariesYou can also choose to download the binaries instead of using NuGet. There are 2 downloads: mongorepository_full.x.x.x contains all binaries required (MongoRepository and the 10gen C# driver) mongorepository.x.x.x contains only the MongoRepository binary Make sure you reference MongoReposit...Cryptography Enumerations JavaScript Shell: Cryptography Enumerations JavaScript Shell 1.0.0: First ReleaseCMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. 
Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...New Projects3D Projectile: A 3D Projectile program showing the motion of a ballASP.NET Web Application Starter Kit: This project template is an ASP.NET solution skeleton for a typical web application or single-page application (SPA).Behaving - Behaviour Tree for C#: Behaviour is a Behaviour Tree implementation in C#.Kinect Stream Saver Application _SDK 2: This application is developed based on a sample called "ColorBasics-D2D C++" developed by Microsoft corporation. (Compatible with SDK 2: K4W v2 Dev Preview)MVC Bootstrap Paginator: The MVC Bootstrap Paginator is lightweight and easy to use. It's works out of the box and requires minimal configuration.NuGet Reference Switcher: NuGet Reference Switcher is a Visual Studio extension which can be used to automatically switch NuGet DLL references to project references and vice-versa. QKit: A WP8.1 library that provides various controls and classes that will help developers quickly and easily augment their apps to behave more like native apps.SharePoint 2013 Document Icon Linker: Links the document icon in library views to the document.SharePoint Autocomplete People Search: SharePoint People SearchWADM: WADM

    Read the article

  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want to sort based on the values in another associated model and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes class SortFields < Sequel::Model(:sort_fields) set_primary_key :objectid end class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid end Some backstory: the data is imported from a legacy system into mysql. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes. If I set the dataset to sort on a field in items like so class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset(order(:sortnumber)) end then the expected clause is added to the generated SQL, e.g.: >> Items.limit(1).sql => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1" and queries still return objects: >> Items.limit(1).first.class => Items If I order it by the associated fields though... class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). order(:sort1, :sort2, :sort3) ) end ...I get hashes ?> Items.limit(1).first.class => Hash My first thought was that this happens because all fields from sort_fields are included in the results and maybe if selected only the fields from items I would get Items objects again: class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). select(:items.*). order(:sort1, :sort2, :sort3) ) end The generated SQL is what I would expect: >> Items.limit(1).sql => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1" It returns the same rows as the set_dataset(order(:sortnumber)) version but it still doesn't work: >> Items.limit(1).first.class => Hash Before I add the sort fields to the items table so that they can all live happily in the same model, is there a way to tell Sequel to return on object when it wants to return a hash?

    Read the article

  • Jquery / PhP / Joomla Select one of two comboboxes does not get updated

    - by bluesbrother
    I am making a Joomla component wich has 3 comboboxes/selects on the page. One with languages and 2 with subjects. If you change the language the other two get filled with the same data (the subjects in the selected language) the name of the selectbox are different but otherwise the same. I get an error for one of the subject boxes (hence the url gets red), but there is no logic in wich one will give an error. In Firebug i get the HTML back for the one without the other and this one gets updated but the other one gives nothing back. If i right click in firebug on the one that gave the error, and do "send again" it will load fine. Is their a timing problem? The change event of the language selectbox: jQuery('#cmbldcoi_ldlink_language').bind('change', function() { var cmbLangID = jQuery('#cmbldcoi_ldlink_language').val(); if (cmbLangID !=0) { getSubjectCmb_lang(cmbLangID, 'cmbldcoi_ldlink_subjects', '#ldlinksubjects'); } }); Function that requests the php file to create the html for the select: function getSubjectCmb_lang(langID, cmbName, DivWhereIn) { var xdate = new Date().getTime(); var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&'+ xdate; jQuery(DivWhereIn).load(url, function(){ }); } And in the php file there is a connection made to the database to ge the information to build the selectbox. I use a function for this that is okay because it makes al my selectboxes. The only place where there are problems with select boxes is on the pages that has 2 selects that need to change when a third one changed. My guess it is somewhere in the Jquery where this goes wrong. And i think it has to do with timing. But i am open for all sugestions. Thanx. UPDATE: No the ID and Name fields are different. They are named : cmbldcoi_child cmbldcoi_parent Here is my code: The change event for the first combobox which makes the other two change: jQuery('#cmbldcoi_language_chain_subj').bind('change', function(){ var langID = jQuery('#cmbldcoi_language_chain_subj').val(); if (langID != 0){ getSubjectCmb_lang(langID, 'cmbldcoi_child', '#div_cmbldcoi_child'); getSubjectCmb_lang(langID, 'cmbldcoi_parent', '#div_cmbldcoi_parent'); } }); } The function wicht calls the php file to get the info from the database: function getSubjectCmb_lang(langID, cmbName, DivWhereIn){ var xdate = new Date().getTime(); var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&'+ xdate; jQuery(DivWhereIn).load(url, function(){ }); } The PHP code function getcmbsubj_lang(){ $langid = JRequest::getVar('langid'); if ($langid > 0 ){ $langid = JRequest::getVar('langid'); }else{ $langid = 1; } $cmbName = JRequest::getVar('cmbname'); //$lang_sufx = self::get_#__sufx($langid); print ld_html::ld_create_cmb_html($cmbName, '#__ldcoi_subjects','id', 'subject_name', " WHERE id_language={$langid} ORDER BY subject_name" ); } There is a class wich is called ld_html wich has an funnction in it that creates a combobox. ld_html::ld_create_cmb_html() It gets an table name, id field, namefield and optional an where clause. It all works fine if there is just one combobox thats needs updating. It give a problem when there are two. Thanks for the help !

    Read the article

  • CodePlex Daily Summary for Tuesday, August 19, 2014

    CodePlex Daily Summary for Tuesday, August 19, 2014Popular ReleasesVG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added Support for 'MadImage.org' links NEW: Added Support for 'ImgSpot.org' links NEW: Added Support for 'ImgClick.net' links NEW: Added Support for 'Imaaage.com' links NEW: Added Support for 'Image-Bugs.com' links NEW: Added Support for 'Pictomania.org' links NEW: Added Support for 'ImgDap.com' links NEW: Added Support for 'FileSpit.com' links FIXED: 'ImgSee.me' linksLinq 4 Javascript: Version 2.4: Minor Changes Made Added Count() and Count(with where clause) Distinct will now use a dictionary instead of a custom dictionary object Organize the unit tests. The variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...OpenCppCoverage: OpenCppCoverage 0.9.1: - Add Jenkins support. - Command line argument can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784Easy Backup Windows Service: Release 2.0 with CU: Fix log error when "To" directory not exist in fyle system. Force run program as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.Easy Backup Application: Release 2.0 with CU: Fix log error when "To" directory not exist in fyle system. Fix app location initialization. Force run program as administrator by default. Update solution to VS 2013.TEBookConverter: 1.5: Added: Turkish and French translations Added: A few interface changes Removed: SkinDynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: bugfixingFluentx: Fluentx v1.5.3: Added few more extension methods.fastJSON: v2.1.2: 2.1.2 - bug fix circular referencesJPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728 JPush REST API Version: v3 JPush Documentation Reference .NET framework: v4.0 or above. Sample: class: JPushClientV3 2014 Augest 15th.SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. 
Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.Google .Net API: Drive.Sample: Google .NET Client API – Drive.SampleInstructions for the Google .NET Client API – Drive.Sample</h2> http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.SampleBrowse Source, or main file http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samplesProgram.cs <h3>1. Checkout Instructions</h3> <p><b>Prerequisites:</b> Install Visual Studio, and <a href="http://mercurial.selenic.com/">Mercurial</a>.</p> ...FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...DNN CMS Platform: 07.03.02: Major Highlights Fixed backwards compatibility issue with 3rd party control panels Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari Fixed issue where users were able to create pages with the same name Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade Fixed issue that stopped some skins from being upgraded to newer versions Fixed issue that randomly showed an unexpected error during us...WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel works) - Units are not supported yet (Coming up) The Mac version is yet as tested as the windows version.MFCMAPI: August 2014 Release: Build: 15.0.0.1042 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeNew Projectsangle.works: Angle.works sample solutionAutomatically Update CMS from SCOM: Microsoft Central Management Server (CMS) was released with SQL Server 2008; however keeping the server list up-to-date is challenging.BLE for NETMF: Class library targeted to .NET Micro Framework for BLE support (both central and peripheral profiles).CIIS: ciisClipboard Calculator: Simple tool to calculate numbers from the clipboard. Useful for SQL Server Management Studio User.CYCLOPS: CYCLOPS is a project that seeks to aid the development and testing of ANPR software. CYCLOPS provides a virtual camera and virtual frame grabber for image procedemo4: demo4eAsset: Web based Fixed Asset Management SystemEFML-Runtime: Create applications with xml,css,js feeadmin: Complete fee managementFinal Fantasy VI Randomizer: This is a simple program that randomizes skills, items, and characters in Final Fantasy VI. 
It is used for racing the game.Object Editor: Object Editor is a visual editor which allows to edit any properties of any class and to simplify the changing and displaying of a class instance.pbb: Pinoy Badminton Buddies Web ApplicationRubyFeed: RubyFeed is a stand-alone HTML5 page for Internet Explorer on Windows 8 and above that makes it easy to view and manage your Windows RSS Platform feeds.

    Read the article

  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.1 MARCH 2010

    - by Tim Dexter
    Latest roll up patch for 10.1.3.4.1 is now out in the wild. Yep, there are bug fixes but the guys have implemented some great enhancements. I'll be covering some of them over the coming weeks, from collapsing bookmarks in your PDFs to better MS AD support to 'true' Excel templates, yes you read that correctly! Patch is available from Oracle's support site. Just search for patch 9546699. Here's the contents and readme, apologies for the big list but at least you can search against it for a particular fix. This patch contains backports of following bugs for BI Publisher Enterprise 10.1.3.4.0 and 10.1.3.4.1. 6193342 - REG:SAMPLE DATA FILE FOR PDF FORM MAPPING IS NOT VALIDATED 6261875 - ERRONEOUS PRECISION VALIDATION ON ONLINE ANALYZER 6439437 - NULL POINTER EXCEPTION WHEN PROCESSING TABLE OF CONTENT 6460974 - BACS EFT PAYMENT INSTRUCTION OUTPUT FILE IS EMPTY 6939721 - BIP: REPORT BUSTING DELIVERY KEY VALUES CANNOT CONTAIN SEVERAL SPECIAL CHARACTER 6996069 - USING XML DB FOR BI REPOSITORY FAILS WITH RESOURCENOTFOUNDEXCEPTION 7207434 - TIMEZONE:SHOULD NOT DO TIMEZONE CONVERSION AGAINST CANONICAL DATE YYYY-MM-DD 7371531 - SUPPORT FOR CSV OUTPUT FOR STRUCTURED XML AND NON SQL DATA SOURCES 7596148 - ER: LDAP FOR MS AD TO SEARCH FROM AD ROOT 7646139 - WEBSERVICES ERROR 7829516 - BIP STANDALONE FAILS TO BURST USING XSL-FO TEMPLATES 8219848 - PDF TEMPLATE REPORT NOT PERFORMING PAGE BREAK 8232116 - PARAMETER VALUE IS PASSED AS NULL,IF IT CONTAINS 'AND' WITHIN THE STRING 8250690 - NOT ABLE TO UPLOAD TEMPLATE VIA BIP API 8288459 - ER: QUERY BUILDER OPTION TO NOT INCLUDE TABLENAME. PREFIX IN SQL 8289600 - REPORT TITLE AND DESCRIPTION CAN'T SUPPORT MULTIPLE LANGUAGES 8327080 - CAN NOT CONFIGURE ORACLE EBUSINESS SUITE SECURITY MODEL WITH ORACLE RAC 8332164 - AN XDO PROPERTY TO ENABLE DEBUG LOGGING 8333289 - WEB SERVICE JOBS FAIL AFTER BIP STARTED UP 8340239 - HTTP NOTIFY IS MISSING IN SCHEDULEREPORTREQUEST 8360933 - UNABLE TO USE LOGGED IN BI USER AS THE WSSECURITY USERNAME IN A VARIABLE FORMAT 8400744 - ADMINISTRATOR USER DOES NOT HAVE FULL ADMINISTRATOR RIGHTS 8402436 - CRASH CAUSED BY UNDETERMINED ATTRKEY ERROR IN MULTI-THREADED 8403779 - IMPOSSIBLE TO CONFIGURE PARAMETER FOR A REPORT 8412259 - PDF, RTF OUTPUT NOT HANDLING THE TABLE BORDER AND CONTENT OVERFLOWS TO NEXT PAGE 8483919 - DYNAMIC DATASOURCE WEBSERVICE SHOULD WORK WITH SERVERSIDE CONNECTIONS 8444382 - ID ATTRIBUTE IN TITLE-PAGE DOES NOT WORK WITH SELECTACTION PROPERTY 8446681 - UI LANGUAGE IS NOT REFLECTED AT THE FIRST LOG IN 8449884 - PUBLICREPORTSERVICE FAILS ON EMAIL DELIVERY USING BIP 10.1.3.4.0D+ - NPE 8454858 - DB: XMLP_ADMIN CAN SEE ALL THE FOLDERS BUT ONLY HAS VIEW PERMISSIONS 8458818 - PDFBOOKBINDER FAILS WITH OUTOFMEMORY ERROR WHEN TRYING TO BIND > 1500 PDFS 8463992 - INCORRECT IMPLEMENTATION OF XLIFF SPECIFICATION 8468777 - BI PUBLISHER QUERY BUILDER NOT LOADING SCHEMA OBJECTS 8477310 - QUERY BUILDER NOT WORK WITH SSL ON STANDALONE OC4J 8506701 - POSITIVE PAY FILE WITH OPTIONS NOT CREATING FILE CHECKS OVER 2500 8506761 - PERFORMANCE: PDFBOOKBINDER CLASS TAKES 4 HOURS TO BIND 4000 PAGES 8535604 - NPE WHEN CLICKING "ANALYZER FOR EXCEL" BUTTON IN ALL_* REPORTS 8536246 - REMOVE-PDF-FIELDS DOES NOT WORK WITH CHECKBOXES WITH OPT ARRAY 8541792 - NULLPOINTER EXCEPTION WHILE USING SFTP PROTOCOL 8554443 - LOGGING TIME STAMP IN 10G: THE HOUR PART IS WRONG 8558007 - UNABLE TO LOGIN BIP WITH UNPRIVILEGED USER WHEN XDB IS USED FOR REPORSITORY STOR 8565758 - NEED TO CONNECT IMPERSONATION TO DATA SOURCE WITH PL/SQL FUNCTION 8567235 - 
EFTPROCESSOR AND XDO DEBUG ENABLED CAUSES ORG.XML.SAX.SAXPARSEEXCEPTION 8572216 - EFTPROCESSOR NOT THREAD SAFE - CAUSING CORRUPTED REPORTS TO BE GENERATED 8575776 - LANDSCAPE REPORT ORIENTAION NOT SELECTED WHEN REPORT IS PRINTED WITH PS 8588330 - XLIFF GENERATING WITH WRONG MAXWIDTH ATTRIBUTE IN SOME TRANS-UNITS 8584446 - EFTGENERATOR DOES NOT USE XSLT SCALABILITY - JAVA.LANG.OUTOFMEMORY EXCEPTION 8594954 - ENG: BIP NOTIFY MESSAGE BECOMES ENGLISH 8599646 - ER:EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX 8605110 - PDFSIGNATURE API ENCOUNTERS JAVA.LANG.NULLPOINTEREXCEPTION ON PDF WITH WATERMARK 8660915 - BURSTING WITH DATA TEMPLATE NOT WORKING WITH OPTION: VALUE=FALSE 8660920 - ER: EXTRACT XHTML DATA USING XDODTEXE IN XHTML FORMAT 8667150 - PROBLEM WITH 3RD APPLICATION ABOUT PDF GENERATED WITH BI PUBLISHER 8683547 - "CLICK VIEW REPORT BUTTON TO GENERATE THE REPORT" MESSAGE IS DISPLAYED 8713080 - SEARCH" PARAMETER IS NOT SHOWING NON ENGLISH DATA IN INTERNET EXPLORER 8724778 - EXCEL ANALYZER PARAMETERS DO NOT WORK WITH EXCEL 2007 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1 8728807 - DYNAMIC JDBC DATA SOURCE WITH PRE-PROCESS FUNCTION BASED ON EXISTING DATA SOURCE 8759558 - XDO TEMPLATE SHOWS CURRENCY IN WRONG FORMAT FOR DUNNING 8792894 - EFTPROCESSOR DOES NOT SUPPORT XSL TEMPLATE AS INPUTSTREAM 8793550 - BIP GENERATES CSV REPORTS OUTPUT FORMAT WITH EXTENTION .OUT NOT .CSV IN EMAIL 8819869 - PERIOD CLOSE VALUE SUMMARY REPORT (XML) RUNNING INTO WARNING 8825732 - MY FOLDERS LINK BROKEN WITH USER NAME THAT INCLUDES A SLASH (/) OBIEE SECURITY 8831948 - TRYING TO GENERATE A SCATTER PLOT USING THE CHART WIZARD 8842299 - SEEDED QUERY ALWAYS RETURNS RESULTS BASED ON FIRST COLUMN 8858027 - NODE.GETTEXTCONTEXT() NOT AVAILABLE IN 10G UNDER OC4J 8859957 - REPORT TITLE ALIGNMENT GOES BAD FOR REPORTS WITH XLIFF FILE ATTACHED 8860957 - ER: IMPROVE PERFORMANCE OF ANSWERS PARAMETERS 8891537 - GETREPORTPARAMETERS WEB SERVICE API ISSUES WITH OAAM REPORTS 8891558 - GETTING SQLEXCEPTION IN GENERATEREPORT WEB SERVICE API ON OAAM REPORTS 8927796 - ER: DYANAMIC DATA SOURCE SUPPORT BY DATA SOURCE NAME 8969898 - BI PUBLISHER WEB SERVICE GETREPORTPARAMETERS DOES NOT TRANSLATE PARAMETER LABEL 8998967 - MULTIPLE XSL PREDICATES ELEMENT[A='A'] [B='B'] CAUSES XML-22019 ERROR 9012511 - SCALABLE MODE IS NOT WORKING IN XMLPUBLISHER 10.1.3.4 9016976 - ER: PRINT XSL-T AND FOPROCESSING TIMING INFORMATION 9018580 - WEB SERVICE CALL FAILS WHEN REPORT INCLUDES SEARCH TYPE 9018657 - JOB FAILS WHEN LOV QUERY CONTAINS BIND VARIABLES :XDO_USER_UI_LOCALE 9021224 - PERFORMANCE ISSUE TO VIEW DASHBOARD PAGE WITH BIP REPORT LINKS 9022440 - ER: SUPPORT "COMB OF N CHARACTERS" FEATURE PDF FORM TEXT FIELDS 9026236 - XPATH DOES NOT WORK CORRECTLY IN 10.1.3.4.1 9051652 - FILE EXTENSION OF CSV OUTPUT IS TXT WHEN IT IS EXPORTED FROM REPORT VIEWER 9053770 - WHEN SENDING CSV REPORT OUTPUT BY EMAIL SOMETIMES IT IS SENT WITHOUT EXTENSION 9066483 - PDFBOOKBINDER LEAVE SOME TEMPORARY FILES AFTER MERGING TITLE PAGE OR TOC 9102420 - USE RELATIVE PATHS IN HYPERLINKS 9127185 - CHECKBOX NOT WORK ON SUB TEMPLATE 9149679 - BASE URL IS NOT PASSED CORRECTLY 9149691 - PROVIDE A WAY TO DISABLE THE ABILITY TO CREATE SCHEDULED REPORT JOB "PUBLIC" 9167822 - NOTIFICATION URL BREAKS ON FOLDER NAMES WITH SPACES 9167913 - CHARTS ARE MISSING IN PDF OUTPUTS WHEN THE DEFAULT OUTPUT FORMAT IS NOT A PDF 9217965 - REPORT HISTORY TAKES LONG TIME TO RENDER THE PAGE 9236674 - BI PUBLISHER PARAMETERS DO NOT CASCADE REFRESH AFTER SECOND PARAMETER 9283933 - OPTION 
TO COLLAPSE PDF OUTPUT BOOKMARKS BY DEFAULT 9287245 - SAVE COMPLETED SCHEDULED REPORTS IN ITS REPORT NAME AND NOT IN A GENERIC NAME 9348862 - ADD FEATURE TO DISABLE THE XSLT1.0-COMPATIBILITY IN RTF TEMPLATE 9355897 - ER: NEED A SAFE DIVIDE FUNCTION 9364169 - UIX 2.3.6.6 PATCH UPTAKE FOR 10.1.3.4.1 9365153 - LEADING WHITESPACE CHARACTERS IN A FIELD TRIMMED WHEN RUN VIEW OR EXPORT TO .CSV 9389039 - LONG TEXT IS NOT WRAPPED PROPERLY IN THE AUTOSHAPE ON RTF TEMPLATE 9475697 - ENH: SUB-TEMPLATE:DYNAMIC VARIABLE WITH PARAMETER VALUE IN CALL-TEMPLATE CLAUSE 9484549 - CHANGE DEFAULT FOR "XSLT1.0-COMPATIBILITY" TO FALSE FOR 10G 9508499 - UNABLE TO READ EXCEL FILE IF MORE THAN 1800 ROWS GENERATED 9546078 - EMAIL DELIVERY INFORMATION SHOULD NOT BE SAVED AND AUTO-FED IN JOB SUBMISSION 9546101 - EXCEPTION OCCURS WHEN SFTP/FTP REMOTE FILENAME DOSE NOT CONTAIN A SLASH '/' 9546117 - SFTP REPORT DELIVERY FAILS WITH NO CLASS DEF FOUND EXCEPTION ON WEBLOGIC 9.2 Following bugs are included in 10.1.3.4.1 and they are only applied to 10.1.3.4.0. 4612604 - FROM EDGE ATTRIBUTE OF HEADER AND FOOTER IS NOT PRESERVED 6621006 - PARAMNAMEVALUE ELEMENT DEFINITION SHOULD HAVE PARAMETER TYPE 6811967 - DATE PARAMETER NOT HANDLING DATE OFFSET WHEN PASSED UPPERCASE Z FOR OFFSET 6864451 - WHEN BIP REPORTS TIMEOUT, THE PROCESS TO LOG BACK IN IS NOT USER FRIENDLY 6869887 - FUSION CURRENCY BRD:4.1.4/4.1.6 OVERRIDINDG MASK /W XSLT._XDOCURMASKS /W SYMBOL 6959078 - "TEXT FIELD CONTAINS COMMA-SEPARATED VALUES" DOESN'T WORK IN CASE OF STRING 6994647 - GETTING ERROR MESSAGE SAYING JOB FAILED EVEN THOUGH WORKS OK IN BI PUBLISHER 7133143 - ENABLE USER TO ENTER 'TODAY' AS VALUE TO DATE PARAMETER IN SCHEDULE REPORT UI 7165117 - QA_BIP_FUNC:-CLOSED LIFE TIME REPORT ERROR MESSAGE IN CMD 7167068 - LEADER-LENGTH OR RULE-THICKNESS PROPRTY IS TOO LARGE 7219517 - NEED EXTENSION FUNCTIONS TO URL ENCODE TEXT STRING. 7269228 - TEMPLATEHELPER PRODUCES A GARBLED OUTPUT WHEN INVOKED BY MULTIPLE THREADS 7276813 - GETREPORTPARAMETERSRETURN ELEMENT SHOULD HAVE DEFAULT VALUE 7279046 - SCHDEULER:UNABLE TO DELETE A JOB USING API 7280336 - ER: BI PUBLISHER - SITEMINDER SUPPORT - GENERIC NON-ORACLE SSO SUPPORT 7281468 - MODIFY SQL SERVER PROPERTIES TO USE HYP DATA DIRECT IN JDBCDEFAULTS.XML 7281495 - PLEASE ADD SUPPORTED DBS TO JDBCDEFAULT.XML AND LIST EACH DB VERSION SEPARATELY 7282456 - FUSION CURRENCY BRD 4.1.9.2: CURRENCY AMOUTS SHOULD NOT BE WRAPPED. 
MINUS SIGN 7282507 - FUSION CURRENCY BRD4.1.2.5:DISPLAY CURRENCY AND LOCALE DERIVED CURRENCY SYMBOL 7284780 - FUSION CURRENCY BRD 4.1.12.4 CORRECTLY ALIGN NEGATIVE CURRENCY AMOUNTS 7306874 - OPP ERROR - JAVA.LANG.OUTOFMEMORYERROR: ZIP002:OUTOFMEMORYERROR, MEM_ERROR 7309596 - SIEBELCRM: BIP ENHANCEMENT REQUEST FOR SIEBEL PARAMETERIZATION 7337173 - UI LOCALE IS ALWAYS REWRITTEN TO EN WHEN MOVE FROM DASHBOARD 7338349 - REG:ANALYZER REPORT WITH AVERAGE FUNCTION FAIL TO RUN FOR NON INTERACTIVE FORMAT 7343757 - OUTPUT FORMAT OF TEMPLATES IS NOT SAVING 7345989 - SET XDK REPLACEILLEGALCHARS AND ENHANCE XSLTWRAPPER WARNING 7354775 - UNEXPECTED BEHAVIOR OF LAYOUT TEMPLATE PARAMETER OF RUNREPORT WEBSERVICES API 7354798 - SEQUENCE ORDER OF PARAMETERS FOR THE RUNREPORT WEBSERVICES API 7358973 - PARALLEL SFTP DELIVERY FAILS DUE TO SSHEXCEPTION: CORRUPT MAC ON INPUT 7370110 - REGN:FAIL WHEN USE JNDI TO XMLDB REPORT REPOSITORY 7375859 - NEW WEBSERVICE REQUIRED FOR RUNREPORT 7375892 - REQUIRE NEW WEBSERVICE TO CHECK IF REPORTFOLDER EXISTS 7377686 - TEXT-ALIGN NOT APPLIED IN PDF IN HEBREW LOCALE 7413722 - RUNREPORT API DOES NOT PASS BACK ANY GENERATED EXCEPTIONS TO SCHEDULEREPORT 7435420 - FUSION CURRENCY: SUPPORT MICROSOFT(JAVA) FORMAT MASK WITH CURRENCY 7441486 - ER: ADD PARAMETER FOR SFTP TO BURSTING QUERY 7458169 - SSO WITH OID LDAP COULD NOT FETCH OID ROLES 7461161 - EMAIL DELIVERY FAILS - DELIVERYEXCEPTION: 0 BYTE AVAILABLE IN THE GIVEN INPU 7580715 - INCORRECT FORMATTING OF DATES IN TIMEZONE GMT+13 7582694 - INVALID MAXWIDTH VALUE CAUSES NLS FAILURES 7583693 - JAVA.LANG.NULLPOINTEREXCEPTION RAISED WHEN GENERATING HRMS BENEFITS PDF REPORT 7587998 - NEWLY CREATED USERS IN OID CANT ACCESS REPORTS UNTILL BI PUBLISHER IS RESTARTED 7588317 - TABLE OF CONTENT ALWAYS IN THE SAME FONT 7590084 - REMOVING THE BIP ENTERPRISE BANNER BUT KEEPING THE REPORTS & SCHEDULES TAB 7590112 - SOMEONE NOT PRIVILEGED ACCESS BIP DIRECTLY SHOULD GET A CUSTOM PAGE 7590125 - AUTOMATING CREATION OF USERS AND ROLES 7597902 - TIMEZONE SUPPORT IN RUNREPORT WEBSERVICE API 7599031 - XML PUBLISHER SUM(CURRENT-GROUP()) FAILS 7609178 - ISSUE WITH TAGS EXTRACTED FROM RTF TEMPLATE 7613024 - HEADER/FOOTER SETTINGS OF RTF TEMPLATE ARE NOT RETAINING IN THE RTF OUTPUT 7623988 - ADD XSLT FUNCTION TO PRINT XDO PROPERTIES 7625975 - RETRIEVING PARAMETER LOV FROM RTF TEMPLATE 7629445 - SPELL OUT A NUMBER INTO WORDS 7641827 - ANALYTICS FROZED AFTER PAGE TAB WHICH INCLUDES [BI PUBLISHER REPORT] WERE CLICKE 7645504 - BIP REPORT FROZED AFTER THE SAME DASHBOARD BIP REPORTS WERE CLICKED SIMULTANEOUS 7649561 - RECEIVE 'TO MANY OPEN FILE HANDLES' ERROR CAUSING BI TO CRASH 7654155 - BIP REMOVES THE FIRST FILE SEPARATOR WHEN RE-ENTER REPOSITORY LOCATION IN ADMIN 7656834 - NEED AN OPTION TO NOT APPEND SCHEMA NAME IN GENERATED QUERY 7660292 - ER: XDOPARSER UPGRADE TO XDK 11G 7687862 - BIP DATA EXTRACTING ENHANCEMENT FOR SIEBEL BIP INTEGRATION 7694875 - ADMINISTRATOR IS SUPER USER WHETHER CONFIGURED MANDATORY_USER_ROLE OR NOT 7697592 - BI PUBLISHER STRINGINDEXOUTOFBOUNDSEXCEPTION WHEN PRINTING LABEL FROM SIM 7702372 - ARABIC/ENGLISH NUMBER/DATE PROBLEM, TOTAL PAGE NUMBER NOT RENDERED IN ENGLISH 7707987 - OUTOFMEMORY BURSTING A BI PUBLISHER REPORT BI SERVER DATA SOURCE 7712026 - ER: CHANGE CHART OUTPUT FORMAT TO PNG IN HTML OUTPUT 7833732 - THE 'SEARCH' PARAMETER TYPE CANNOT BE USED IN IE6 UNDER WINDOWS 8214839 - ER: INCREASE COLUMN SIZE IN SCHEDULER TABLE XMLP_SCHED_JOB 8218271 - ISSUES WHILE CONVERTING EXCEL TO XML 8218452 - BI PUBLISHER STANDALONE : GRAPHICS 
WITHOUT COLORS IF MORE THAN 33 PAGES 8250980 - USER WITH XMLP_ADMIN RESPONSIBILITY IS NOT ABLE TO EDIT REPORT IN BIP 8262410 - IMPOSSIBLE TO PRINT PDF CREATED BY BI PUBLISHER VIA 3RD PARTY PDF APPLICATION 8274369 - QA: CANNOT DELETE EMAIL SERVER UNDER DELIVERY CONFIGURATION 8284173 - FO:VISIBILITY="HIDDEN" DOESN'T WORK WITH FO:PAGE-NUMBER-CITATION 8288421 - THE VALUE OF VIEW BY GO BACK TO MY HISTORY IN SCHEDULES TAB 8299212 - REG: THE SPECIFICAL BI USER DIDN'T GET THE CORRECT REPORT HISTORY 8301767 - ORA-01795 ERROR OCCURED AFTER ACCESSING DASHBOARD PAGE WHICH INCLUDES BIP 8304944 - ADD SIEBEL SECURITY MODEL IN BI PUBLISHER 10.1.3.4.1 8312814 - QA:HOT:OBI SERVER JDBC DRIVER BIJDBC14.JAR IN XMLPSERVER.WAR IS INCORRECT 8323679 - BI PUBLISHER SENDS HTML REPORT TO OUTLOOK CLIENT AS ATTACHMENT NOT INLINE 8370794 - HISTORY OF COMPLETED SCHEDULER JOBS STILL SHOW ONE AS RUNNING ON CLUSTER ENV 8390970 - OUT OF MEMORY EXCEPTION RAISED, WHILE SAVING THE DATA 8393681 - CHECKBOX IS SHOWING UP AS CHECKED WHEN DATA IS NOT CHECKED VALUE 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1 UIX fixes: 6866363 - SUPPORT FOR JAVA DATE FORMAT AS PER JDK 1.4 AND ABOVE 6829124 - DATE PARAMETER NOT HANDLING DATE OFFSET AS PER JAVA STANDARDS ---------------------------- INSTALLATION FOR ENTERPRISE ---------------------------- Upgrade from 10.1.3.4.0d (patch 8284524, 8398280) and 10.1.3.4.1 does not require step 8 and step 9. 1 - Make a backup copy of the xmlp-server-config.xml file located in <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: In an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml 2 - Back up all the directories under the BI Publisher repository (for example: {Oracle_Home}/xmlp/XMLP). 3 - If you are using Scheduling, back up your existing BI Publisher Scheduler schema. 4 - Shut down BI Publisher. 5 - Undeploy the BI Publisher application ("xmlpserver") from your J2EE application server. See your application server documentation for instructions how to undeploy an application. 6 - Deploy the 10.1.3.4 xmlpserver.ear or xmlpserver.war to your application server. See "Manually Installing BI Publisher to Your J2EE Application Server" secition of BI Publisher Installation Guide for guidelines for your application server type. 7 - Copy the saved backup copy of the xmlp-server-config.xml file from step 1 to the newly created BI Publisher <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: In an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml 8 - Copy ssodefaults.xml to the following directory. And replace [host]:[port] with your server's information. Default values for other properties can be updated depending on your configuration. <Existing Repository>\XMLP\Admin\Security 9 - Copy database-config.xml to the following directory. <Existing Repository>\XMLP\Admin\Scheduler 10 - Restart xmlpserver application or Application Server ---------------------------------- IBM WEBSPHERE 6.1 DEPLOYMENT NOTE ---------------------------------- When users fail to log on to BI Publisher with "HTTP 500 Internal Server Error" on WebSphere 6.1, you must change Class Loader configuration to avoid the error. 
(bug7506253 - XMLPSERVER WON'T START AFTER DEPLOYMENT TO WEBSPHERE 6.1) SystemErr.log: java.lang.VerifyError: class loading constraint violated (class: oracle/xml/parser/v2/XMLNode method: xdkSetQxName(Loracle/xml/util/QxName;)V) at pc: 0 .... Class Loader Configuration Steps: 1 - Login to WebSphere Admin console. Click Enterprise Applications under Applications menu 2 - Click xmlpserver application name from the list 3 - Select "Class loading and update detection" 4 - Update class loader configuration as follows in Class Loader -> General Properties * Polling interval for updated files: [0] Seconds * Class loader order: [x] Classes loaded with application class loader first * WAR class loader policy: [x] Single class loader for application 5 - Apply this change and save the new configuration. 6 - Restart xmlpserver application Please refer to WebSphere 6.1 documentation for more details: http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/trun_classload_entapp.html ------------------------------------------------------- Oracle WebLogic Server 11g R1 (10.3.1) Deployment NOTE ------------------------------------------------------- If you are deploying BI Publisher to WebLogic Server 10.3.1, you must add the following setting at startup for the domain that contains the BI Publisher server in the /weblogic_home/user_projects/domains/base_domain/bin/startWebLogic.sh script : -Dtoplink.xml.platform=oracle.toplink.platform.xml.jaxp.JAXPPlatform This setting is required to enable BI Publisher to find the TopLink JAR files to create the Scheduler tables.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) Part 2 – Table per Type (TPT)In today’s blog post I am going to discuss Table per Concrete Type (TPC) which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy mainly based on what we've learned in this series. TPC and Entity Framework in the Past Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far and I've seen in some resources that it was even discouraged. The reason for that is just because Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches then configuring TPC requires manually writing XML in the EDMX file which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with fluent API just like other strategies and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches. Table per Concrete Type (TPC)In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: As you can see, the SQL schema is not aware of the inheritance; effectively, we’ve mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns. TPC Implementation in Code First Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on EntityMappingConfiguration class called MapInheritedProperties that exactly does this for us. 
Here is the complete object model as well as the fluent API to create a TPC mapping: public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } }          public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } }          public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } }      public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; }              protected override void OnModelCreating(ModelBuilder modelBuilder)     {         modelBuilder.Entity<BankAccount>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("BankAccounts");         });         modelBuilder.Entity<CreditCard>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("CreditCards");         });                 } } The Importance of EntityMappingConfiguration ClassAs a side note, it worth mentioning that EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is an snapshot of this class: namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping {     public class EntityMappingConfiguration<TEntityType> where TEntityType : class     {         public ValueConditionConfiguration Requires(string discriminator);         public void ToTable(string tableName);         public void MapInheritedProperties();     } } As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT and now we are using its MapInheritedProperties along with ToTable method to create our TPC mapping. TPC Configuration is Not Done Yet!We are not quite done with our TPC configuration and there is more into this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount();     CreditCard creditCard = new CreditCard() { CardType = 1 };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. The reason we got this exception is because DbContext.SaveChanges() internally invokes SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method on its turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges method merely iterates over all entries in ObjectStateManager and invokes AcceptChanges on each of them. 
Since the entities are in Added state, AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database and that's where the problem occurs since both the entities have been assigned the same value for their primary key by the database (i.e. on both BillingDetailId = 1) and the problem is that ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both BankAccounts and CreditCards table has been marked as identity. How to Solve The Identity Problem in TPC As you saw, using SQL Server’s int identity columns doesn't work very well together with TPC since there will be duplicate entity keys when inserting in subclasses tables with all having the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server’s int identity should be used. Some other RDBMSes have other mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds will solve the problem but yet another solution would be to completely switch off identity on the primary key property. As a result, we need to take the responsibility of providing unique keys when inserting records to the database. We will go with this solution since it works regardless of which database engine is used. Switching Off Identity in Code First We can switch off identity simply by placing DatabaseGenerated attribute on the primary key property and pass DatabaseGenerationOption.None to its constructor. DatabaseGenerated attribute is a new data annotation which has been added to System.ComponentModel.DataAnnotations namespace in CTP5: public abstract class BillingDetail {     [DatabaseGenerated(DatabaseGenerationOption.None)]     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } As always, we can achieve the same result by using fluent API, if you prefer that: modelBuilder.Entity<BillingDetail>()             .Property(p => p.BillingDetailId)             .HasDatabaseGenerationOption(DatabaseGenerationOption.None); Working With The Object Model Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount()      {          BillingDetailId = 1                          };     CreditCard creditCard = new CreditCard()      {          BillingDetailId = 2,         CardType = 1     };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Polymorphic Associations with TPC is Problematic The main problem with this approach is that it doesn’t support Polymorphic Associations very well. 
After all, in the database, associations are represented as foreign key relationships and in TPC, the subclasses are all mapped to different tables so a polymorphic association to their base class (abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the the domain model we introduced here where User has a polymorphic association with BillingDetail. This would be problematic in our TPC Schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column, which would have to refer both concrete subclass tables. This isn’t possible with regular foreign key constraints. Schema Evolution with TPC is Complex A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses. Generated SQLLet's examine SQL output for polymorphic queries in TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that being executed in the database: var query = from b in context.BillingDetails select b; Just like the SQL query generated by TPT mapping, the CASE statements that you see in the beginning of the query is merely to ensure columns that are irrelevant for a particular row have NULL values in the returning flattened table. (e.g. BankName for a row that represents a CreditCard type). TPC's SQL Queries are Union Based As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; (look at the lines highlighted in yellow.) EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined, project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There is also no Joins involved so it has a better performance than the SQL queries generated by TPT where a Join is required between the base and subclasses tables. Choosing Strategy GuidelinesBefore we get into this discussion, I want to emphasize that there is no one single "best strategy fits all scenarios" exists. As you saw, each of the approaches have their own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don’t require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn’t usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. 
Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won’t create problems in the long run. If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC. By default, choose TPH only for simple problems. For more complex cases (or when you’re overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn’t mean you can ignore persistence concerns when designing your classes. Summary In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, which is inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting! References: ADO.NET team blog; Java Persistence with Hibernate book
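For readers who want to see the shape of the association flagged as problematic in the "Polymorphic Associations with TPC is Problematic" section, here is a minimal C# sketch that is not from the original post. The User type and its property names are assumptions; only BillingDetail comes from the series' object model.

// Hypothetical User type (an assumption for illustration); BillingDetail is the
// abstract base class defined earlier in the post.
public class User
{
    public int UserId { get; set; }
    public string UserName { get; set; }

    // Polymorphic association: at runtime this may reference a BankAccount or a
    // CreditCard. With TPH or TPT the Users table needs only one foreign key
    // column pointing at the single hierarchy/base table; with TPC there is no
    // BillingDetails table at all, so a regular foreign key constraint cannot
    // express this relationship.
    public virtual BillingDetail BillingDetail { get; set; }
}

If your model needs associations like this, that is usually a signal to prefer TPH or TPT over TPC, in line with the guidelines above.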

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing Search Platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search into your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to a user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price-ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing data profiling, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-Commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about Solr configuration of indexing – but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP POST to send it to the search engine for indexing.
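As a rough illustration of that indexing step, the following is a minimal C# sketch that is not part of the original article: it applies an XSLT transform to an exported catalog file and posts the result to a Solr update handler. The file paths, the stylesheet, and the Solr URL are assumptions; in practice you would drive this from PowerShell against the Commerce Server management web service as described above.

using System;
using System.IO;
using System.Net;
using System.Xml.Xsl;

class CatalogIndexer
{
    static void Main()
    {
        // Assumed inputs: an exported Commerce Server catalog and an XSLT that
        // reshapes it into Solr's <add><doc>...</doc></add> update format.
        const string catalogXml = @"C:\Exports\catalog.xml";
        const string solrXslt = @"C:\Exports\CatalogToSolr.xslt";
        const string solrDocs = @"C:\Exports\solr-docs.xml";
        const string solrUpdateUrl = "http://localhost:8080/solr/update"; // assumed Solr/Tomcat URL

        // Step 1: transform the catalog export into a Solr add-document payload.
        var transform = new XslCompiledTransform();
        transform.Load(solrXslt);
        transform.Transform(catalogXml, solrDocs);

        // Step 2: post the documents to the update handler, then commit so the
        // new documents become visible to searches.
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
            client.UploadString(solrUpdateUrl, "POST", File.ReadAllText(solrDocs));

            client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
            client.UploadString(solrUpdateUrl, "POST", "<commit/>");
        }

        Console.WriteLine("Catalog pushed to Solr for indexing.");
    }
}

Posting the <commit/> separately keeps the add and the commit distinct, which makes it easy to batch several add payloads before committing once.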
Essentially a sequance component is a step in a serial workflow used to call a data repository (which in most cases is usually the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library as Sequence Components will need to be strongly named to be deployed. Once you are inside of your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and the Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequanceComponent and overide the ExecuteQueryMethod. Your screen will then look something similar ot this: As all we are doing on this component is conducting a search we are only interested in the ExecuteQuery method. This method accepts three arguments, queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regulary accessed information, and the response object is the object which we will return the result of our search upon. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner-workings of actually making a SOLR call, I'll simply provide the sample code here. I would highly recommend however looking at the SolrNet wiki as they have some great explinations of how the API works. What you will find however is that there are some further extensions required when attempting to integrate a custom search provider. Firstly you out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text Search with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria and within this you need to detail the properties you will require to allow you to submit as parameters to the SOLR search API. My exmaple looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>();   }     public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase{ get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class to derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder and is simply used to construct an instance of the CommerceQueryOperation you have just created and expose the properties you want set. 
My Message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overidden ExecuteQuery method in the SequenceComponent to the CommerceCatalogSolrSearch operation you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now you have all of your search parameters present, you can go off an call the external search platform API. You will of-course get proprietry objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translatators. Translators are another separate class, this time derived inheriting the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator . As you can imaginge this interface is specific for the conversion of the object TO a CommerceEntity, you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interace you will get a single translate method which has a source onkect, destination CommerceEntity, and a collection of properties as arguments. For simplicity sake in this example I have hard-coded the mappings, however best practice would dictate you map the objects using your metadatadefintions.xml file . 
Once complete your translator would look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { #region IToCommerceEntityTranslator Members   public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof (SearchProduct))) { var searchResult = (SearchProduct) source;   destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product";   } }  Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them on to the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName);   Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); }  In SOLR I actually have two objects being returned – a product, and a collection of facets so I have an additional translator for facet (which maps to a custom facet CommerceEntity) and my facet response from SOLR is passed into the Translator helper class seperatley. When all of this is pieced together you have sucessfully completed the extensiblity point coding. You would have created a new OperationSequanceComponent, a custom SearchCritiera object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and can start calling them in your code. Make sure you sign you assembly, compile it and identiy its signature. Next you need to put this a reference of your new assembly into the Channel.Config configuration file replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly add any custom CommerceEntities you have developed to your MetaDataDefintions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return back CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied then simply add further sequence components into the OperationSequence (obviously keeping the search response first) to the node of your Channel.Config file. Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account you may be receiving multiple types of CommerceEntity returned: public KeyValuePair<FacetCollection ,List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();   query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries;     totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;   // No results. 
Return the empty list if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>();   totalItemCount = (int)queryResponse.TotalItemCount;   // Prepare a multi-operation to retrieve the product variants var multiOperation = new CommerceMultiOperation();     //Add products to results foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;   var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);   productQuery.RelatedOperations.Add(variantQuery);   multiOperation.Add(productQuery); }   CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); }   //Get facet collection FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();     return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); }    ..And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaing a unifed API in the remainder of your code. This logic stands for any extensibility within CommerceServer, which requires excution in a serial fashioon such as call to LOB systems or web service to validate or enrich data. Feel free to use this example on other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!
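To show how the finished helper might be consumed, here is a short call-site sketch that is not from the article itself; the facet field names, paging values, and the searchService instance (whichever class you placed DoFacetedProductQuerySearch in) are assumptions for illustration only.

// Illustrative values only - not taken from the article.
var facetQueries = new Dictionary<string, string>
{
    { "Brand", "Dell" },      // refinements the user has already selected
    { "Category", "Servers" }
};

int totalItemCount;
KeyValuePair<FacetCollection, List<Product>> results =
    searchService.DoFacetedProductQuerySearch(
        "server",       // search phrase
        "DisplayName",  // order key
        "asc",          // sort order
        1,              // page index
        20,             // records per page
        facetQueries,
        out totalItemCount);

FacetCollection facets = results.Key;     // drives the refinement (facet) UI
List<Product> products = results.Value;   // products for the current page

Each facet the user clicks simply adds another entry to facetQueries before the call is re-issued, which is what keeps the refinements refreshing as the result set narrows.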

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar Inline Table-Valued Multi-statement Table-Valued I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database. 
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them and you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one roughly twice as expensive as the outer query. And because it’s being called in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley
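As a footnote to the SARGability comment near the start of this post, here is a hedged sketch of what “inverting the function” looks like in practice. The YEAR() predicate below is my own illustration (it is not taken from the post); the rewrite simply mirrors the DATEADD-style range predicates the post’s functions already use.

-- Not SARGable: the function is applied to the column, so an index
-- on OrderDate cannot be used for a seek.
SELECT o.SalesOrderID, o.OrderDate
FROM Sales.SalesOrderHeader AS o
WHERE YEAR(o.OrderDate) = 2003;

-- SARGable: invert the function and query the column unchanged.
SELECT o.SalesOrderID, o.OrderDate
FROM Sales.SalesOrderHeader AS o
WHERE o.OrderDate >= '20030101'
  AND o.OrderDate < '20040101';

The second form gives the optimizer a plain range predicate on OrderDate, which is exactly the shape used inside dbo.FetchSales_inline and friends.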


  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to help beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle’s JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice ’n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it’s not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where x is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can’t possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you’ll see in the Business Component Browser, and you’ll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn’t our phantom ADF developer?  Doesn’t ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn’t implementing record locking at all?  Shouldn’t we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014 instead? Let’s verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF’s locking strategy.  It is determined by the Application Module’s jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we’d rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console.  At runtime this option turns on instrumentation within the ADF BC layer, which, among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we’re looking to confirm.
Setting our locking mode to pessimistic, opening the Business Components Browser of a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries: [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[423] Where binding param 1: 1206  As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see: [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[445] Update binding param 1: N[446] Where binding param 2: 1206[447] BookingsView1 notify COMMIT ... [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [449] EntityCache close prepared statement ...and as a result the changes are saved to the database, and the lock is released. Let’s see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record’s CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs: [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[513] Where binding param 1: 1205[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[517] Update binding param 1: Y[518] Where binding param 2: 1205[519] BookingsView1 notify COMMIT ... [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [521] EntityCache close prepared statement Again, even though we’re seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be that, unless there’s the unlikely case that the LOCK statement never really hits the database, or the even less likely case that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it.  So there’s no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we’ll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don’t understand yet what’s causing it.
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_DATE column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record without supplying a value for the specific column; the database in this case will populate the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario. Instead of trigger views I must admit I haven’t double checked this scenario, but it seems plausible: that of the Oracle database’s instead of trigger view (sometimes referred to as an instead of view).  A view in the database is based on a query, and depending on the query’s complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, which gives the view designer the ability to define not only the view query, but also a set of programmatic PL/SQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer with a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and its data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren’t used by the view’s underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record. At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables.  However we must be careful that these legacy constructs aren’t used and assumed to be in place by other legacy systems.  Dropping the database triggers or default values that the existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms applications.  Proficient software engineers would recognize that such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise, which is not ideal. Solving the mystery once and for all Luckily ADF has built-in functionality to deal with this issue, though it’s not a surprise that it does, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database’s feature set: at the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record.  Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue. [Post edit - as Oracle’s Steven Davelaar correctly points out in the comments below, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the corresponding column for the attribute in the database itself isn’t inadvertently updated.  In turn you’re possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.
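To make the database-side causes above concrete, here is a hedged, Oracle-flavoured sketch (the table, column and trigger names are invented for illustration) of the kind of default value and trigger that leave ADF’s cached copy of a row out of date:

-- A default value: applied when an INSERT omits the column, so the
-- stored row can differ from the row ADF cached in the mid-tier.
CREATE TABLE bookings_demo
(
    booking_no  NUMBER PRIMARY KEY,
    chargeable  CHAR(1) DEFAULT 'N',
    updated_on  DATE    DEFAULT SYSDATE
);

-- A trigger: silently overwrites a column on every insert or update,
-- again making the stored row differ from ADF's original copy.
CREATE OR REPLACE TRIGGER bookings_demo_biu
BEFORE INSERT OR UPDATE ON bookings_demo
FOR EACH ROW
BEGIN
    :NEW.updated_on := SYSDATE;
END;
/

On the next edit of the same row, ADF compares its cached original against what is actually stored, notices updated_on has moved, and raises JBO-25014; this is precisely the situation that Refresh After Insert/Update (or a Change Indicator on an attribute the trigger never touches) is there to absorb.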


  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]
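To round the post off, here is a hedged sketch of the two workarounds applied to the example tables above (the replacement index name is my own invention):

-- Workaround 1: move the filter column into the index key.
-- INCLUDE alone is not sufficient; the column must be a key column.
DROP INDEX uq1 ON #Target;

CREATE UNIQUE INDEX uq1b
ON #Target (ak, status_code)
WHERE status_code = 'a';

-- Workaround 2: keep the original index, but force the wide
-- (per-index) update plan with the undocumented trace flag.
MERGE #Target AS t
USING #Changes AS c
ON c.pk = t.pk
WHEN MATCHED AND c.status_code <> t.status_code THEN
    UPDATE SET status_code = c.status_code
OPTION (QUERYTRACEON 8790);

Because the filter column is constant within the filtered index, adding it to the key does not weaken the uniqueness guarantee on ak, and per the post either change yields a successfully-executing wide update plan with the Split/Sort/Collapse sequence. As the post notes, trace flag 8790 is undocumented, which arguably makes the index change the safer of the two options for production code.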


  • CodePlex Daily Summary for Wednesday, June 13, 2012

    CodePlex Daily Summary for Wednesday, June 13, 2012Popular ReleasesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v1.8: Installation guide: Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI Note: if previous module installations are detected, they are removed during upgrade. Note: PowerShell 3.0 RC is now supported. Note: Windows Server 2012 is partially supported (output formatting is not working for me). Release notes for version 1.8.0: Version 1.8 introduces a set of new .NET clas...Metodología General Ajustada - MGA: 02.07.02: Cambios Parmenio: Corrección para que se generen los objetivos en el formato PRO_F03. Se debe ejcutar el script en la BD. Cambios John: Soporte técnico telefónico y por correo electrónico. Integración de código fuente. Se ajustan los siguientes formatos para que no dupliquen los anexos al hacer clic dos veces en Actualizar: FORMATO ID 03 - ANALISIS DE PARTICIPANTES, FORMATO ID 05 - OBJETIVOS.Generación de instaladores: Conectado y Desconectado.BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsQTP FT Uninstaller: QTP FT Uninstaller v3: - KnowledgeInbox has made this tool open source - Converted to C# - Better scanning & cleaning mechanismWebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine added proper exception handling code added state support for callback added property AllowUnstrustedCertificate for JsonWebSocket added properties for sending ping automatically improved JsBridge fixed a uri compatibility issueXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. 
The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.0: Everything seems perfect if you find any problem you can report to http://www.rbsoft.org/contact.htmlMedia Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...????????API for .Net SDK: SDK for .Net ??? Release 1: 6?12????? ??? - ?????.net2.0?.net4.0????Winform?Web???????。 NET2 - .net 2.0?Winform?Web???? NET4 - .net 4.0?Winform?Web???? Libs - SDK ???,??2.0?4.0?SDK??? 6?11????? ??? - ?Entities???????????EntityBase,???ToString()???????json???,??????4.0???????。2.0?3.5???! ??? - Request????????AccessToken??????source=appkey?????。????,????????,???????public_timeline?????????。 ?? - ???ClinetLogin??????????RefreshToken???????false???。 ?? 
- ???RepostTimeline????Statuses???null???。 ?? - Utility?Bui...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesNew Projects4SQ - Foursquare for WP7: This is a public Foursquare App, designed for the COMMUNITY for WP7.Alert Management Web Part: Business Problem: In a Project management site, When new project sites created, Project Manager creates alerts for all team members in multiple Lists/Libraries. Pain point is: He need to manually create alerts by going to each and every list/library >> Alert Me >> and then adding users to keep them informed. Yes, Its a pain for them to add users one by one in all of the lists and libraries, Solution: we started creating Custom Alert Management web part which will solve this problem. K...An Un-sql SQL Database Installer: D.E.M.O.N SQL Database Installer is an Un-SQL(SQL means boring here) Database server installer for many types of Database :MySQL, SQLite and others.Aqzgf: android for qzgfbaoming: ????????????,???dbentry.net 4.2??????。Comdaybe 2012_knockoutjs examples: This is a visual studio 2012 project containing all the demo's of #comdaybe 2012. more information on: http://www.communityday.be/ETExplorer: This application is a replacement for the standard windows explorer. When I began working with Windows 7 I noticed that some of the features I liked from XP were missing. After trying many freeware replacements and not finding any of them having all the features I needed, I decided to develop my own explorer replacement.ExamAnalysis: Exam Analysis Website (building...)FizzBuzz: FizzBuzzImproved Dnn Event Log Email Notification provider: The email notifications for log events in Dnn are poor. This project aims to improve them through a new logging provider. The initial work uses the existing logging provider and just overrides the SendLogNotifications method. Over time hopefully this will be improved to enhance the email sending capabilities of Dnn.InfoMap: a framework for mapsiTextSharp MPL for Silverlight & Windows Phone: This is a port of the iText Sharp (4.2) that was the last version published under the Mozilla Public License for Silverlight and Windows Phone. You can use this version in a commercial application and it does not require your app to be released as open source. You do not need any kind of license from anyone to use this unless you want to change the publishing information (where it sets iText as the publisher)ModifyMimeTypes: In Exchange 2007 and 2010, the Content-Type encoding for MIME attachments is determined by a list which is stored in the Exchange Organization container in Active Directory within the msExchMimeTypes property. The list can be easily viewed in the Exchange Management Shell using: •(Get-OrganizationConfig).MimeTypes The list is order specific. So if two entries exist for the same attachment type, the one earlier in the list will be the one that is used. One example of this is the encod...Monitor Reporting Services with PowerPivot: PowerPivot workbook with a connection to Reporting Service log. Several built in reports are already created in the workbook. You will need to connect to the execution log 2 view in your report server database. Typically the report server Database is named ReportServer. The view to query is named is ExecutionLog2. You may need to add a where clause to the query in the workbook to limit the data you want to see. I already have a where cluase that limits it to only rendered reports. 
The Date...QCats: QCats put hearts into socail relationship.Rad sa bazom: Ovo je samo proba, brsem brzoREJS: An easy to use, easy to implement Razor preparser for CSS and JS with support for caching, external data, LESS and much more.RexSharp: RexSharp is a relatively new web framework, that can easily be integrated with ASP.NET. It is based largely on the Websharper framework, written in F# - but with significant additions and optimizations. Like Websharper, RexSharp aims to make web and mobile app development a lot more fun, with F#. This is a preliminary release - detailed instructions for using the framework will be made available in a short while. ShareDev.Webparts: Coming soon...SharePoint Common Framework: SharepointCommon is a framework for Microsoft SharePoint 2010© Server and Foundation. It allows to map list items to simple hand-writen POCO classes and perform actions by manipulate with entities.Solution Extender for Microsoft Dynamics CRM 2011: Solution Extender makes it easier for Dynamics CRM 2011 integrators to export/import components that can't be included in solutions. You will be able to export/import Duplicate detection rules, Saved views and more to come It's developed in C#.System Center 2012 - Orchestrator Integration Packs and Utilities: System Center 2012 - Orchestrator Integration Packs and UtilitiesTalkBack: A small library to enable decoupled communication between different components.Tomson Scattering for GDT: Tomson is a software used on Gas Dynamic Trap setup in Institute of Nuclear Physics of Russian Academy of Sciences for electron temperature calculation in plasma based on tomson scattering method measurements.tony first project: first projectVirtual Card Table: This is a classroom project for explaining TDD.WellFound Yachts: WellFound Yachts

