Search Results

Search found 1110 results on 45 pages for 'constraint'.

Page 26/45

  • CodePlex Daily Summary for Tuesday, August 28, 2012

    CodePlex Daily Summary for Tuesday, August 28, 2012Popular ReleasesImageServer: v1.1: This is the first version releasedChristoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Project-Templates-for-DotNetNuke.aspx If you need a project template for older versions of Visual Studio check out our previous releases.Home Access Plus+: v8.0: v8.0828.1800 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install di...Math.NET Numerics: Math.NET Numerics v2.2.0: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Also available as NuGet packages: PM> Install-Package MathNet.Numerics PM> Install-Package MathNet.Numerics.FSharp New: instead of the special Silverlight build we now provide a portable version supporting .Net 4, Silverlight 5 and .Net Core (WinRT) 4.5: PM> Install-Package MathNet.Numerics.PortablePhalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3391 (September 2012): New features: Extended ReflectionClass libxml error handling, constants TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging Lot of fixes of project files, formatting, smart indent, colorization etc. Improved ...WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. 
Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review Documentation has been reviewed by the quality and recording team All critical bugs have been resolved Known Issues / Bugs Spelling, grammar and content revisions are in progress. Hotfix will be published.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525 - escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. fixed symbol problem with nuget package. 4.62 should have nuget symbols available again. Also want to highlight again the breaking change introduced in 4.61 regarding the renaming of the DLL from AjaxMin.dll to AjaxMinLibrary.dll to fix strong-name collisions between the DLL and the EXE. Please be aware of this change and...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know we were planning to release version 2.70 much later (the end of September). But today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes please visit the release notes p...MyRouter (Virtual WiFi Router): MyRouter 1.2.9: . Fix: Some missing changes for fixing the window subclassing crash. · Fix: fixed bug when Run MyRouter at the first Time. · Fix: Log File · Fix: improve performance speed application · fix: solve some Exception.Private cloud DMS: Essential server-client full package: Requirements: - SQL server >= 2008 (minimal Express - for Essential recommended) - .NET 4.0 (Server) - .NET 4.0 Client profile (Client) This version allow: - full file system functionality Restrictions: - Maximum 2 parallel users - No share spaces - No hosted business groups - No digital sign functionality - No ActiveDirectory connector - No Performance cache - No workflow - No messagingJavaScript Prototype Extensions: Release 1.1.0.0: Release 1.1.0.0 Add prototype extension for object. Add prototype extension for array.Glyphx: Version 1.2: This release includes the SdlDotNet.dll dependency in the setup, which you will need.TFS Project Test Migrator: TestPlanMigration v1.0.0: Release 1.0.0 This first version do not create the test cases in the target project because the goal was to restore a Test Plan + Test Suite hierarchy after a manual user deletion without restoring all the Project Collection Database. 
As I discovered, deleting a Test Plan will do the following : - Delete all TestSuiteEntry (the link between a Test Suite node and a Test Case) - Delete all TestSuite (the nodes in the test hierarchy), including root TestSuite - Delete the TestPlan Test c...ERPStore eCommerce FrontOffice: ERPStore.Core V4.0.0.2 MVC4 RTM: ERPStore.Core V4.0.0.2 MVC4 RTM (Code Source)ZXing.Net: ZXing.Net 0.8.0.0: sync with rev. 2393 of the java version improved API, direct support for multiple barcode decoding, wrapper for barcode generating many other improvements and fixes encoder and decoder command line clients demo client for emguCV dev documentation startedNew ProjectsA Constraint Propogation Solver in F#: An experimental implementation of the Variable Consistency "Bucket Elimination" algorithm for constraint propogationAirline Pilot Academy: Taller de Sistemas de Información - UCB ArchiveManageSys: Something about archive managementCriteria Workflow Engine: A C++ Workflow Engine: Desing and execute business process.DnnDash Service: The DnnDash project enables administrators of DotNetNuke websites to view any installed DotNetNuke Dashboard components via other devices.Dynamics NAV UniWPF Addin: This project is Addin control for Microsoft Dynamics NAV 2009, allowing developers to put WPF controls directly to NAV page.Essai Salon de Chat: petit projet de communication basé sur une structure server / client(multi)High performance C# byte array to hex string to byte array: The performance key point for each to/from conversion is the (perpetual) repetition of the same if blocks and calculations...ImageCloudLock Backup Solution: ImageCloudLock 2012 is designed to backup your important items, like photos, financial documents, PDFs, excel documents and personal items to the Cloud service.Login with Facebook in ASP.Net MVC3 & Get data from Facebook User: This project is useful for ASP.Net MVC developer. For login with Facebook in ASP.Net MVC3 & Get data from user's facebook account.Lucy2D: Projet for developing 2D Games based on WPF and XNA. The point is to minimize effort for developers and fast prototyping.Main Street: Human Resources software for small business. Using C# and SQL Server Express.ManagedZFS: A managed implementation of the ZFS filesystem. Reliability, performance, and manageability. Also, *block-pointer rewrite* !!!Metro English Persian Dictionary: This is an English - Persian (Farsi) dictionary designed and developed for Windows 8 Metro User Interface which contains over 50000 words. My Personal Site: My Personal SiteNeverball Framework: Neverball FrameworkSample code: This project is simple a common location for me to store project skeletons and snippets for help bootstrapping other projects.Smart Card Fuzzer: SCFuzz is a smart card middleware fuzz testing tool (fuzzer). Using API hooking, SCFuzz modifies data returned by the card in order to find bugs in the host.Test Activation: Start looking into it.Vector, a .net generic collection to use instead of a List - in niche cases.: The Vector can be used as a replacement for a List if a large number of elements must be stored, and element inserts and deletes are frequent.WCF duplex Message: This is test project uses WCF to implement message broadcast.WCF! WTF?: Getting to know WCFZune to Lync Now Playing: Zune to Lync Now Playing is a simple application that let you display on your Lync Personal Note, the current song playing in Zune Player.

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies. 1. Cross-schema dependencies Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: Creating a table with the new schema Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER) Dropping the old table Rename new table to same name as old table Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: Drop foreign key constraint on SchemaA.Table1 Rebuild SchemaB.Table1 Rebuild SchemaA.Table1, adding the foreign key constraint to the new table This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough. 2. Dependency chains The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. 
Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance. 3. Comparison dependencies Consider the following schema: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two objects models, what you will end up with is: SOURCE  TARGET SchemaA.Table1 -> SchemaA.Table1 SchemaB.Table1 -> (no object exists) When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: Create SchemaB.Table1 Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1 Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1): SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);   CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1); Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. 
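    As an aside, here is roughly what the server-side CONNECT BY walk described above could look like when issued from client code via JDBC. This is a minimal sketch only, not the product's actual query (which unions several dependency sources and walks both directions of the graph); the column names are the standard ones on the all_dependencies view:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // Sketch: walk outwards from every object owned by one schema, following
        // "depends on" edges in all_dependencies; NOCYCLE guards against loops.
        public class DependencyWalkSketch {
            static void printDependencies(Connection conn, String schema) throws SQLException {
                String sql =
                    "SELECT owner, name, referenced_owner, referenced_name " +
                    "  FROM all_dependencies " +
                    " START WITH owner = ? " +
                    "CONNECT BY NOCYCLE PRIOR referenced_owner = owner " +
                    "                AND PRIOR referenced_name = name";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, schema);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.printf("%s.%s -> %s.%s%n",
                                rs.getString(1), rs.getString(2),
                                rs.getString(3), rs.getString(4));
                        }
                    }
                }
            }
        }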
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
    1. Find initial dependencies of schemas the user has selected to compare on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found
    For the schema above, this will result in the following sequence of actions:
    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run dependency query on found objects: no objects found in source; no objects to start with in target
    6. Stop
    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don’t read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don’t yet know about a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
    4. Object blacklists and fast dependencies
    When we tested this solution, we were puzzled in that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn’t follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
    To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB'; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
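    To make the client-side traversal described above more concrete, here is a rough sketch of its shape: a breadth-first walk over "SCHEMA.OBJECT" keys that pulls the dependency edges for a schema lazily the first time that schema is encountered, and refuses to follow anything on the blacklist. The helper types and method names are hypothetical, not the actual Schema Compare engine code:

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        // Shape of the client-side dependency walk: breadth-first over "SCHEMA.OBJECT"
        // keys, loading edges for a schema only the first time it is touched, and
        // never following anything in a blacklisted schema.
        public class ClientDependencyWalk {

            /** Hypothetical loader: one query per schema against all_dependencies and
             *  all_constraints, returning edges as {fromKey, toSchema, toObject}. */
            interface EdgeLoader {
                List<String[]> loadEdgesForSchema(String schema);
            }

            static Set<String> findExtraObjects(Set<String> startObjects,
                                                Set<String> startSchemas,
                                                Set<String> blacklistedSchemas,
                                                EdgeLoader loader) {
                Map<String, List<String[]>> edgesByObject = new HashMap<>();
                Set<String> loadedSchemas = new HashSet<>();
                for (String schema : startSchemas) {
                    loadedSchemas.add(schema);
                    index(edgesByObject, loader.loadEdgesForSchema(schema));
                }

                Set<String> visited = new HashSet<>(startObjects);
                Set<String> extras = new HashSet<>();
                Deque<String> work = new ArrayDeque<>(startObjects);

                while (!work.isEmpty()) {
                    String from = work.poll();
                    for (String[] edge : edgesByObject.getOrDefault(from, List.of())) {
                        String toSchema = edge[1];
                        String toKey = edge[1] + "." + edge[2];
                        if (blacklistedSchemas.contains(toSchema)) {
                            continue;                    // e.g. SYS, PUBLIC, MDSYS types
                        }
                        if (loadedSchemas.add(toSchema)) {
                            // the "thunk": first time we see this schema, pull its edges
                            index(edgesByObject, loader.loadEdgesForSchema(toSchema));
                        }
                        if (visited.add(toKey)) {
                            extras.add(toKey);           // extra object to populate
                            work.add(toKey);
                        }
                    }
                }
                return extras;
            }

            private static void index(Map<String, List<String[]>> edgesByObject,
                                      List<String[]> edges) {
                for (String[] edge : edges) {
                    edgesByObject.computeIfAbsent(edge[0], k -> new ArrayList<>()).add(edge);
                }
            }
        }

    The loader would typically issue one query against all_constraints and all_dependencies per schema, which is what keeps the amount of dependency data read from the server proportional to the schemas actually touched.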

    Read the article

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got is that since, in Agile methodology and Extreme Programming, the design as well as the programming is evolutionary, there are always points where things need to get refactored. It may be possible that when a programmer's level is good, and they understand design implications and don't make critical mistakes, the code continues to evolve. However, in a normal context, what is the ground reality? On a normal day, once significant development has gone into the product and a critical change occurs in the requirements, isn't it a constraint that, however much we wish, fundamental design aspects cannot be modified (without throwing away a major part of the code)? Is it not quite likely that one reaches a dead-end with no further possible improvement on design and requirements? I am not advocating any non-Agile practice here, but I want to know from people who practice agile or iterative or evolutionary development methods about their real experiences. Have you ever reached such dead-ends? How have you managed to avoid or escape them? Or are there measures to ensure that design remains clean and flexible as it evolves?

    Read the article

  • Materialized View does not import properly when importing on a second instance of a database

    - by marinus
    When I import a database with materialized view mv_mt in just one database (Oracle) everything is ok. create materialized view mv_mt refresh complete next trunc( sysdate ) + 1 as SELECT sysdate, media_type.* from media_type; But when I try to import the same database to a copy in another schema I get the following errors: IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=438,WHAT='dbms_refresh.refresh(''"ALEXANDRA"" "."MV_MT"'');',NEXT_DATE=TO_DATE('2012-07-02:14:22:36','YYYY-MM-DD:HH24:MI:" "SS'),INTERVAL='sysdate + 1 / 24 / 60 / 6 ',NO_PARSE=TRUE); END;" IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated ORA-06512: at "SYS.DBMS_JOB", line 100 ORA-06512: at line 1 IMP-00017: following statement failed with ORACLE error 23421: "BEGIN dbms_refresh.make('"ALEXANDRA"."MV_MT"',list=null,next_date=null," "interval=null,implicit_destroy=TRUE,lax=FALSE,job=438,rollback_seg=NUL" "L,push_deferred_rpc=TRUE,refresh_after_errors=FALSE,purge_option = 1,par" "allelism = 0,heap_size = 0); END;" IMP-00003: ORACLE error 23421 encountered ORA-23421: job number 438 is not a job in the job queue ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86 ORA-06512: at "SYS.DBMS_IJOB", line 793 ORA-06512: at "SYS.DBMS_REFRESH", line 86 ORA-06512: at "SYS.DBMS_REFRESH", line 62 ORA-06512: at line 1 IMP-00017: following statement failed with ORACLE error 23410: "BEGIN dbms_refresh.add(name='"ALEXANDRA"."MV_MT"',list='"ALEXANDRA"."MV" "_MT"',siteid=0,export_db='ORCL01'); END;" IMP-00003: ORACLE error 23410 encountered ORA-23410: materialized view "ALEXANDRA"."MV_MT" is already in a refresh group ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ORA-06512: at "SYS.DBMS_IREFRESH", line 484 ORA-06512: at "SYS.DBMS_REFRESH", line 140 ORA-06512: at "SYS.DBMS_REFRESH", line 125 ORA-06512: at line 1 Anyone any ideas? Regards, Marinus

    Read the article

  • Set up Work Manager Shutdown Trigger in WebLogic Server 10.3.4 Using WLST

    - by adejuanc
    WebLogic Server's Work Managers provide a way to control work and allocated threads. You can set different scheduling guidelines for different applications, depending on your requirements. There is a default self-tuning Work Manager, but you might want to set up a custom work manager in some circumstances: for example, when you want the server to prioritize one application over another when a response time goal is required, or when a minimum thread constraint is needed to avoid deadlock. The Work Manager Shutdown Trigger is a tool to help with stuck threads which will do the following:
    - Shut down the Work Manager.
    - Move the application to Admin State (not active).
    - Change the Server instance health state to failed.
    Example of a Shutdown Trigger set on the config.xml for your domain:
        <work-manager>
          <name>stuckthread_workmanager</name>
          <work-manager-shutdown-trigger>
            <max-stuck-thread-time>30</max-stuck-thread-time>
            <stuck-thread-count>2</stuck-thread-count>
          </work-manager-shutdown-trigger>
        </work-manager>
    Understand that any misconfiguration of the Work Manager can lead to poor performance on the server. Any changes must be made and tested before going to production. How can one create a WorkManagerShutdownTrigger for WLS 10.3.4 using WLST? You should be able to create a WorkManagerShutdownTrigger using WLST by following these steps:
        edit()
        startEdit()
        cd('/SelfTuning/mydomain/WorkManagers')
        create('myWM','WorkManager')
        cd('myWM/WorkManagerShutdownTrigger')
        create('myWMst','WorkManagerShutdownTrigger')
        cd('myWMst')
        ls()

    Read the article

  • Automated testing tool development challenges (for embedded software)

    - by Karthi prime
    My boss wants to come up with a proposal for the following tool: An IDE: able to build, compile, and debug, with JTAG programming for the micro-controller. A test suite that reads the code in the IDE, auto-generates the test cases, and gives the in-target unit testing results (which is done by controlling code execution in the micro-controller via the IDE). A no-overhead code coverage tool which interacts with the test suite and IDE. My work is to obtain the high-level architecture of this tool, so as to proceed further. My current knowledge: There are tool-chains available from the chip manufacturers for the micro-controllers which can be utilized along with an open-source IDE like Eclipse, and along with an open-source burner, a complete IDE for a micro-controller can be put together. Test cases can be auto-generated by reading the source file through a process of parsing and scripting, based on keywords. The test suite must be able to command the IDE to control execution through breakpoints and read the register contents from the micro-controller - this enables the in-target unit testing. No-overhead code coverage should be done by no-overhead code instrumentation so as to execute it in the resource-constrained environment of the micro-controller. I have the following questions: Any advice on the validity of my understanding? What are the challenges I will have during the development? What are the helpful open-source tools regarding this? What is the development time for this software? Thanks

    Read the article

  • Adjusting the rate of movement of different objects on the same timer

    - by theUg
    I have a series of objects moving along straight lines, and I want to implement slight changes of velocity for each of the objects. The constraint is the existing animation model. I am new to this, and not sure of the best way to accommodate varying speeds, but what do I know? It is a Java application that repaints the panel every time the timer expires. The timer is a swing.Timer object configured with a timer delay constant. Every time the game is stepped, objects’ coordinates advance by an increment constant. Most of the objects are of the same class. Is there a fairly easy way to refactor the existing system to allow changing the velocity of an individual object? Is there some obvious common solution I am not aware of? The idea I have right now is to set the timer delay fairly small and only move objects every so many cycles of animation, so that the apparent speed can be adjusted by varying how often they get moved. But that seems fairly involved, and I do not think it is the most elegant solution in terms of performance, what with repainting the whole frame every 3-5 milliseconds. Can it be done by advancing the objects a varying number of times during a certain interval (let’s say 35ms for something like 28fps), and using the repaint() method to redraw just the individual object? Do I need to mess with pausing animation for smoothness at higher redraw rates? Is it common practise to check for collision at a larger step interval, but draw animation a lot more frequently?
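    One common way to get per-object speeds without touching the timer or the frame rate is to give each object its own velocity and advance positions by velocity multiplied by the elapsed time on every tick. A minimal sketch, with made-up class and field names (not the poster's code):

        import java.util.ArrayList;
        import java.util.List;
        import javax.swing.Timer;

        // Minimal sketch: one swing Timer, many objects, each with its own velocity.
        // Positions advance by velocity * elapsed seconds, so apparent speed varies
        // per object without changing the timer delay or skipping frames.
        public class VariableSpeedSketch {

            static class Mover {
                double x, y;          // current position
                double vx, vy;        // pixels per second, can differ per object

                void step(double dtSeconds) {
                    x += vx * dtSeconds;
                    y += vy * dtSeconds;
                }
            }

            private final List<Mover> movers = new ArrayList<>();
            private long lastTick = System.nanoTime();

            void start() {
                Timer timer = new Timer(33, e -> {        // ~30 fps
                    long now = System.nanoTime();
                    double dt = (now - lastTick) / 1_000_000_000.0;
                    lastTick = now;
                    for (Mover m : movers) {
                        m.step(dt);                       // each object uses its own velocity
                    }
                    // panel.repaint();  // repaint the panel once per tick, as before
                });
                timer.start();
            }
        }

    Because each step is scaled by the measured elapsed time, jitter in when the timer actually fires does not change the apparent speed, and the timer delay can stay at a comfortable rate.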

    Read the article

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, based on a certain criterion, the graph nodes are labeled as RED and BLACK. (A) a subgraph containing all the RED nodes (with edges between only the directly connected neighbours) is formed (note: although this figure shows a tree forming, it may well happen that the subgraph contain loops) (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already present edges in the final result. Question: Is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side-remarks: it would be nice for the triangulation algorithm to take into account a certain vertex priority when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first - I can implement iteratively such a routine, but it's O(n^2) ). it would also be nice to reflect somehow the "hop distance" when adding edges: add edges first between the nodes that were "closer" to each other given the start topology. Nevertheless, disregarding the remarks, is there an already known scenario similar to this one where a triangulation is built upon a partially given set of triangles/edges?

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how we modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe; recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:
    - I create an index for each month
    - By default, searches are aliased against the most recent X months
    - If no results are found, we can search against previous X months
    - As we grow, we can reduce the interval X
    - Each document is routed by the user ID
    So, this should let us do the following:
    - search by user. This will search all indices across 1 shard
    - search by time. This will search ~2 indices (by default) across all shards
    Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros-cons of the above scenario.
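    The index-per-month plus user-routing bookkeeping can be kept out of the search layer by computing the index names and the routing value up front. A small illustrative sketch in plain Java (no ElasticSearch client calls; the index naming scheme is an assumption):

        import java.time.YearMonth;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch of the naming/routing bookkeeping for an index-per-month layout:
        // documents for one month live in "docs-YYYY-MM", and the routing value is
        // the user ID so one user's documents stay on one shard per monthly index.
        public class IndexRoutingSketch {

            static String indexFor(YearMonth month) {
                return String.format("docs-%04d-%02d", month.getYear(), month.getMonthValue());
            }

            /** Most recent N monthly indices, newest first (the default search set). */
            static List<String> recentIndices(YearMonth now, int months) {
                List<String> names = new ArrayList<>();
                for (int i = 0; i < months; i++) {
                    names.add(indexFor(now.minusMonths(i)));
                }
                return names;
            }

            /** Routing value: the user ID keeps one user's docs on a single shard. */
            static String routingFor(String userId) {
                return userId;
            }

            public static void main(String[] args) {
                // e.g. search the last 3 months for user "alice"
                System.out.println(recentIndices(YearMonth.of(2012, 8), 3)); // [docs-2012-08, docs-2012-07, docs-2012-06]
                System.out.println(routingFor("alice"));
            }
        }

    Searching by user then means passing the user ID as the routing value so only one shard per monthly index is hit; searching by time only means listing the relevant monthly indices and omitting the routing value, which fans out across all shards.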

    Read the article

  • Get to No as fast as possible

    - by Tim Hibbard
    There is a sales technique where the strategy is to get the customer to say “No deal” as soon as possible.  The idea is that the sooner you establish terms that your customer is not comfortable with, the sooner you can figure out what they will be willing to agree to.  The same principle can be applied to code design.  Instead of nested if…then statements, a code block should quickly eliminate the cases it is not equipped to handle and just focus on what it is meant to handle. This is code that will quickly become unmaintainable as requirements change:
        private void SaveClient(Client c)
        {
            if (c != null)
            {
                if (c.BirthDate != DateTime.MinValue)
                {
                    foreach (Sale s in c.Sales)
                    {
                        if (s.IsProcessed)
                        {
                            SaveSaleToDatabase(s);
                        }
                    }
                    SaveClientToDatabase(c);
                }
            }
        }
    If an additional requirement comes along that requires the Client to have Manager approval or for a Sale to be under $20K, this code will get messy and unreadable. A better way to meet the same requirements would be:
        private void SaveClient(Client c)
        {
            if (c == null) { return; }
            if (c.BirthDate == DateTime.MinValue) { return; }

            foreach (Sale s in c.Sales)
            {
                if (!s.IsProcessed) { continue; }
                SaveSaleToDatabase(s);
            }
            SaveClientToDatabase(c);
        }
    This technique moves on quickly when it finds something it doesn’t like.  This makes it much easier to add a Manager approval constraint.  We would just insert the new requirement before the action takes place.

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations – one-to-one, one-to-many, many-to-one or many-to-many – NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. In particular, there are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:
    - None: ignore; this is the default
    - Save-Update: if the entity is being saved or updated, also save any related entities that are either not saved or have been modified, and associate these related entities to the root entity. Generally safe
    - Delete: if the entity is being deleted, also delete the related entities. This is only useful for parent-child relations
    - Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association – orphaned – it is also deleted. Also only for parent-child
    - All: combination of Save-Update and Delete, usually that’s what we want (for parent-child relations, of course)
    - All-Delete-Orphan: same as All, plus delete any related entities that lose their relationship
    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity and not their related entities would result in a constraint violation on the database.
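    As a point of comparison only (this post is about NHibernate, whose mappings are typically written in hbm.xml or attributes rather than annotations), the same parent-child cascade intent in Java Hibernate/JPA annotations looks like this, with All-Delete-Orphan corresponding to cascade = ALL plus orphanRemoval:

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.CascadeType;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;
        import javax.persistence.OneToMany;

        // For comparison: Java Hibernate/JPA expression of a parent-child cascade.
        // Saves, updates and deletes cascade to children, and a child removed from
        // the collection (orphaned) is deleted as well.
        @Entity
        public class Parent {
            @Id
            @GeneratedValue
            private Long id;

            @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL, orphanRemoval = true)
            private Set<Child> children = new HashSet<>();
        }

        @Entity
        class Child {
            @Id
            @GeneratedValue
            private Long id;

            @ManyToOne
            private Parent parent;
        }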

    Read the article

  • Legacy Code Retreat Questions

    - by MarkPearl
    I recently heard of the concept of a Legacy Code Retreat. Since I have attended and helped facilitate some normal Code Retreats, I thought it might be interesting to try a Legacy Code Retreat, but I have a few questions on how a legacy CR differs from a normal one. If anyone has attended a Legacy CR and has some suggestions on how best to host these events, please leave a comment on what has worked for you in the past or if you have any answers to my questions below… Should you restrict the languages that people can do the sessions in? In the normal CRs I have been involved with in the past we have had people attend and code in their programming language of choice. A normal CR lends itself to this because each session starts with no code. With a legacy CR each session seems to start with an existing code base. Is there some sort of limitation on the languages that people can work in during the sessions? If not, how do you give them a base to start from? What happens at the beginning of each session? In the normal CRs that I have attended each session would have a constraint set on it – i.e. no if statements used, no primitives, etc. With a legacy CR it seems more like patterns for refactoring are learned. Does the facilitator explain the pattern used before the session starts, or are they just given a code base to start from and an objective to achieve?

    Read the article

  • How is determined an impact of a requirement change on the existing code?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on an existing code base? Since my question is probably not very clear, here's an example: Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 - "Finished". A new requirement adds a new state, between the 2nd and 3rd ones. It means that: A constraint on the values 1 - 5 in the database must be changed, Business layer and code contracts must be changed to add the new state, Data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc. The application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc. When an application has been written recently by one developer, it's more or less easy to predict every change that needs to be made. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the How do you deal with changing requirements? question. In fact, I'm not interested in evaluating the cost of a change, but rather in a way to predict the parts of an application which will be affected by the change. What those changes will be and how difficult they are really doesn't matter for my question.

    Read the article

  • Problems dualbooting Ubuntu because of UEFI

    - by Koffeehaus
    I have an X-series Asus laptop which I just bought about a month ago. I want to dual-boot Ubuntu and Windows. I can easily access the LiveUSB with UEFI both enabled and disabled. I heard that there were problems with UEFI, so I disabled it. After I installed the system I couldn't access it; it just boots straight to Windows. Another unusual thing that never happened to me before was that the partition editor wanted me to create a BIOS reserved area, which I did, but not at the beginning of the table. Any ideas how to access the Ubuntu partition? As far as I can guess, Windows and Ubuntu both have to use the same type of boot, either Legacy or EFI, and this is not the case with what I have now. So, if I reinstall Ubuntu in UEFI mode to match my Windows type, will I then be able to boot into it? I have a constraint: my laptop doesn't have a CD-ROM, so I cannot reinstall Windows, nor can I move the Windows recovery partition around. This is the boot-repair report: http://paste.ubuntu.com/1354254/

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice-versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment the TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.
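    For reference, the lock-plus-condition-variable "wait until" idiom that the post contrasts TXPAUSE against looks like this (sketched here in Java with java.util.concurrent.locks rather than Pthreads; the idea is the same):

        import java.util.concurrent.locks.Condition;
        import java.util.concurrent.locks.ReentrantLock;

        // The classic lock + condition-variable "wait until" idiom: A waits until B
        // has run and flipped the flag. This is the baseline the post compares the
        // proposed TXPAUSE instruction against.
        public class WaitUntilSketch {
            private final ReentrantLock lock = new ReentrantLock();
            private final Condition done = lock.newCondition();
            private boolean bHasRun = false;

            void a() throws InterruptedException {
                lock.lock();
                try {
                    while (!bHasRun) {      // guard against spurious wakeups
                        done.await();       // politely surrender the CPU while waiting
                    }
                    // ... A's work, now that B has run ...
                } finally {
                    lock.unlock();
                }
            }

            void b() {
                lock.lock();
                try {
                    // ... B's work ...
                    bHasRun = true;
                    done.signalAll();       // wake any waiter
                } finally {
                    lock.unlock();
                }
            }
        }

    The point of TXPAUSE is to let a hardware transaction express the same "stall politely until something I read changes" intent without taking a lock at all.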

    Read the article

  • What would be a good opensource software for graphing circles, circular regions, and regions bounded by curves and/or arcs?

    - by Michael Dykes
    Hullo all. I have just started a job with Chegg, and my 1st assignment has me writing solutions for Stewart's Essential Calculus. I am dealing with the chapter on multiple integration, and need good open-source software that I can easily use to draw regions (domains) that would require multiple integrals: i.e. circular regions, portions of circles, regions bounded by curves and/or arcs, and at some point 3D pictures. In most of these cases, I am not working with an exact equation, or perhaps need to draw the region bounded by r (radius) between 1 and 2 and the angle theta bounded between pi/4 and 3 pi/4. I am not too terribly familiar with programs like Corel Draw, but the documents I have from this company suggest Corel Draw. So I think I am looking for a free, open-source program like Corel Draw or something similar. Any additional suggestions would also be appreciated. I know I can do most, if not all, of this using TikZ, but the learning curve is a bit steep, and at the moment I am under a time constraint. Thanks.

    Read the article

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that's sprung up today. I had to book-keep and emit diagnostics for the average thread performance in a highly-threaded code over a period of last X number of calls and no more. Need of the day: a thread-safe, self-managing stats container. Since .NET 4.0 introduced new thread-safe 'Collections.Concurrent' objects and I've been using them frequently - the one in particular seemed like a good fit for storing each threads' performance data - ConcurrentQueue. But I wanted to store only the most recent X# of calls and since the ConcurrentQueue currently does not support size constraint I had to come up with my own generic version which attempts to restrict usage to numeric types only: unfortunately there is no IArithmetic-like interface which constrains to only numeric types – so the constraints here here aren't as elegant as they could be. (Note the use of the Average() method, of course you can use others as well as make your own).   FIFO FixedSizedConcurrentQueue using System;using System.Collections.Concurrent;using System.Linq; namespace xxxxx.Data.Infrastructure{    [Serializable]    public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>    {        private FixedSizedConcurrentQueue() { }         public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)        {            _queue = queue;        }         ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();         public int Size { get { return _queue.Count; } }        public double Average { get { return _queue.Average(arg => Convert.ToInt32(arg)); } }         public int Limit { get; set; }        public void Enqueue(T obj)        {            _queue.Enqueue(obj);            lock (this)            {                T @out;                while (_queue.Count > Limit) _queue.TryDequeue(out @out);            }        }    } }   The usage case is straight-forward, in this case I’m using a FIFO queue of maximum size of 200 to store doubles to which I simply Enqueue() the calculated rates: Usage var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */   That’s about it. Happy coding!

    Read the article

  • How to create distinct set from other sets?

    - by shyam_baidya
    While solving the problems on Techgig.com, I got stuck on one of the problems. The problem is like this: A company organizes two trips for their employees in a year. They want to know whether all the employees can be sent on the trips or not. The condition is that no employee can go on both trips. Also, to determine which employees can go together, the constraint is that employees who have worked together in the past won't be in the same group. Examples of the problem: Suppose the work history is given as follows: {(1,2),(2,3),(3,4)}; then it is possible to accommodate all four employees in two trips (one trip consisting of employees 1 & 3 and the other having employees 2 & 4). No two employees in the same trip have worked together in the past. Suppose the work history is given as {(1,2),(1,3),(2,3)}; then there is no possible way to have two trips satisfying the company rule and accommodating all the employees. Can anyone tell me how to proceed on this problem?
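    One standard way to formalize this (an assumption about the intended approach, not something stated in the original problem) is graph two-colouring: employees are nodes, each "worked together" pair is an edge, and the two trips exist exactly when the graph is bipartite, i.e. contains no odd cycle. A short breadth-first colouring sketch:

        import java.util.ArrayDeque;
        import java.util.Arrays;
        import java.util.Deque;
        import java.util.List;

        // Bipartiteness check by BFS 2-colouring: colours 0 and 1 correspond to the
        // two trips; an edge between two same-coloured employees means no valid split.
        public class TripSplitSketch {

            /** adjacency.get(i) lists the employees who have worked with employee i. */
            static boolean canSplit(List<List<Integer>> adjacency) {
                int n = adjacency.size();
                int[] colour = new int[n];
                Arrays.fill(colour, -1);                 // -1 = not yet assigned

                for (int start = 0; start < n; start++) {
                    if (colour[start] != -1) continue;   // component already handled
                    colour[start] = 0;
                    Deque<Integer> queue = new ArrayDeque<>();
                    queue.add(start);
                    while (!queue.isEmpty()) {
                        int u = queue.poll();
                        for (int v : adjacency.get(u)) {
                            if (colour[v] == -1) {
                                colour[v] = 1 - colour[u];   // opposite trip
                                queue.add(v);
                            } else if (colour[v] == colour[u]) {
                                return false;                // worked together, same trip
                            }
                        }
                    }
                }
                return true;
            }

            public static void main(String[] args) {
                // {(1,2),(2,3),(3,4)} with employees 1..4 mapped to indices 0..3
                List<List<Integer>> ok = List.of(List.of(1), List.of(0, 2), List.of(1, 3), List.of(2));
                // {(1,2),(1,3),(2,3)} forms an odd cycle
                List<List<Integer>> bad = List.of(List.of(1, 2), List.of(0, 2), List.of(0, 1));
                System.out.println(canSplit(ok) + " " + canSplit(bad));  // true false
            }
        }

    Running it on the two examples gives true for {(1,2),(2,3),(3,4)} and false for {(1,2),(1,3),(2,3)}, matching the expected answers.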

    Read the article

  • Reportviewer stored procedure [closed]

    - by Liesl
    I want to write a stored procedure for my invoice reportviewer. After invoice is generated in reportviewer it must also add the data to my Invoice table. This is all my tables in my database: CREATE TABLE [dbo].[Waybills]( [WaybillID] [int] IDENTITY(1,1) NOT NULL, [SenderName] [varchar](50) NULL, [SenderAddress] [varchar](50) NULL, [SenderContact] [int] NULL, [ReceiverName] [varchar](50) NULL, [ReceiverAddress] [varchar](50) NULL, [ReceiverContact] [int] NULL, [UnitDescription] [varchar](50) NULL, [UnitWeight] [int] NULL, [DateReceived] [date] NULL, [Payee] [varchar](50) NULL, [CustomerID] [int] NULL, PRIMARY KEY CLUSTERED CREATE TABLE [dbo].[Customer]( [CustomerID] [int] IDENTITY(1,1) NOT NULL, [customerName] [varchar](30) NULL, [CustomerAddress] [varchar](30) NULL, [CustomerContact] [varchar](30) NULL, [VatNo] [int] NULL, CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ) CREATE TABLE [dbo].[Cycle]( [CycleID] [int] IDENTITY(1,1) NOT NULL, [CycleNumber] [int] NULL, [StartDate] [date] NULL, [EndDate] [date] NULL ) ON [PRIMARY] CREATE TABLE [dbo].[Payments]( [PaymentID] [int] IDENTITY(1,1) NOT NULL, [Amount] [money] NULL, [PaymentDate] [date] NULL, [CustomerID] [int] NULL, PRIMARY KEY CLUSTERED Create table Invoices ( InvoiceID int IDENTITY(1,1), InvoiceNumber int, InvoiceDate date, BalanceBroughtForward money, OutstandingAmount money, CustomerID int, WaybillID int, PaymentID int, CycleID int PRIMARY KEY (InvoiceID), FOREIGN KEY (CustomerID) REFERENCES Customer(CustomerID), FOREIGN KEY (WaybillID) REFERENCES Waybills(WaybillID), FOREIGN KEY (PaymentID) REFERENCES Payments(PaymentID), FOREIGN KEY (CycleID) REFERENCES Cycle(CycleID) ) I want my sp to find all waybills for specific customer in a specific cycle with payments made from this client. All this data must then be added into the INVOICE table. Can someone please help me or show me on the right direction? create proc GenerateInvoice @StartDate date, @EndDate date, @Payee varchar(30) AS SELECT Waybills.WaybillNumber Waybills.SenderName, Waybills.SenderAddress, Waybills.SenderContact, Waybills.ReceiverName, Waybills.ReceiverAddress, Waybills.ReceiverContact, Waybills.UnitDescription, Waybills.UnitWeight, Waybills.DateReceived, Waybills.Payee, Payments.Amount, Payments.PaymentDate, Cycle.CycleNumber, Cycle.StartDate, Cycle.EndDate FROM Waybills CROSS JOIN Payments CROSS JOIN Cycle WHERE Waybills.ReceiverName = @Payee AND (Waybills.DateReceived BETWEEN (@StartDate) AND (@EndDate)) Insert Into Invoices (InvoiceNumber, InvoiceDate, BalanceBroughtForward, OutstandingAmount) Values (@InvoiceNumber, @InvoiceDate, @BalanceBroughtForward, @ OutstandingAmount) go

    Read the article

  • How can I resolve Hibernate 3's ConstraintViolationException when updating a Persistent Entity's Col

    - by Tim Visher
    I'm trying to discover why two nearly identical class sets are behaving different from Hibernate 3's perspective. I'm fairly new to Hibernate in general and I'm hoping I'm missing something fairly obvious about the mappings or timing issues or something along those lines but I spent the whole day yesterday staring at the two sets and any differences that would lead to one being able to be persisted and the other not completely escaped me. I appologize in advance for the length of this question but it all hinges around some pretty specific implementation details. I have the following class mapped with Annotations and managed by Hibernate 3.? (if the specific specific version turns out to be pertinent, I'll figure out what it is). Java version is 1.6. ... @Embeddable public class JobStateChange implements Comparable<JobStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = JobState.FIELD_LENGTH) private JobState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; public JobStateChange() { } @Override public int compareTo(final JobStateChange o) { return this.date.compareTo(o.date); } @Override public boolean equals(final Object obj) { if (this == obj) { return true; } else if (!(obj instanceof JobStateChange)) { return false; } JobStateChange candidate = (JobStateChange) obj; return this.state == candidate.state && this.actingUser.equals(candidate.getUser()) && this.date.equals(candidate.getDate()); } @Override public int hashCode() { return this.state.hashCode() + this.actingUser.hashCode() + this.date.hashCode(); } } It is mapped as a Hibernate CollectionOfElements in the class Job as follows: ... @Entity @Table( name = "job", uniqueConstraints = { @UniqueConstraint( columnNames = { "agency", //Job Name "payment_type", //Job Name "payment_file", //Job Name "date_of_payment", "payment_control_number", "truck_number" }) }) public class Job implements Serializable { private static final long serialVersionUID = -1131729422634638834L; ... @org.hibernate.annotations.CollectionOfElements @JoinTable(name = "job_state", joinColumns = @JoinColumn(name = "job_id")) @Sort(type = SortType.NATURAL) private final SortedSet<JobStateChange> stateChanges = new TreeSet<JobStateChange>(); ... public void advanceState( final User actor, final Date date) { JobState nextState; LOGGER.debug("Current state of {} is {}.", this, this.getCurrentState()); if (null == this.currentState) { nextState = JobState.BEGINNING; } else { if (!this.isAdvanceable()) { throw new IllegalAdvancementException(this.currentState.illegalAdvancementStateMessage); } if (this.currentState.isDivergent()) { nextState = this.currentState.getNextState(this); } else { nextState = this.currentState.getNextState(); } } JobStateChange stateChange = new JobStateChange(nextState, actor, date); this.setCurrentState(stateChange.getState()); this.stateChanges.add(stateChange); LOGGER.debug("Advanced {} to {}", this, this.getCurrentState()); } private void setCurrentState(final JobState jobState) { this.currentState = jobState; } boolean isAdvanceable() { return this.getCurrentState().isAdvanceable(this); } ... 
@Override public boolean equals(final Object obj) { if (obj == this) { return true; } else if (!(obj instanceof Job)) { return false; } Job otherJob = (Job) obj; return this.getName().equals(otherJob.getName()) && this.getDateOfPayment().equals(otherJob.getDateOfPayment()) && this.getPaymentControlNumber().equals(otherJob.getPaymentControlNumber()) && this.getTruckNumber().equals(otherJob.getTruckNumber()); } @Override public int hashCode() { return this.getName().hashCode() + this.getDateOfPayment().hashCode() + this.getPaymentControlNumber().hashCode() + this.getTruckNumber().hashCode(); } ... } The purpose of JobStateChange is to record when the Job moves through a series of State Changes that are outline in JobState as enums which know about advancement and decrement rules. The interface used to advance Jobs through a series of states is to call Job.advanceState() with a Date and a User. If the Job is advanceable according to rules coded in the enum, then a new StateChange is added to the SortedSet and everyone's happy. If not, an IllegalAdvancementException is thrown. The DDL this generates is as follows: ... drop table job; drop table job_state; ... create table job ( id bigint generated by default as identity, current_state varchar(25), date_of_payment date not null, beginningCheckNumber varchar(8) not null, item_count integer, agency varchar(10) not null, payment_file varchar(25) not null, payment_type varchar(25) not null, endingCheckNumber varchar(8) not null, payment_control_number varchar(4) not null, truck_number varchar(255) not null, wrapping_system_type varchar(15) not null, printer_id bigint, primary key (id), unique (agency, payment_type, payment_file, date_of_payment, payment_control_number, truck_number) ); create table job_state ( job_id bigint not null, acting_user_id bigint not null, date timestamp not null, state varchar(25) not null, primary key (job_id, acting_user_id, date, state) ); ... alter table job add constraint FK19BBD12FB9D70 foreign key (printer_id) references printer; alter table job_state add constraint FK57C2418FED1F0D21 foreign key (acting_user_id) references app_user; alter table job_state add constraint FK57C2418FABE090B3 foreign key (job_id) references job; ... The database is seeded with the following data prior to running tests ... insert into job (id, agency, payment_type, payment_file, payment_control_number, date_of_payment, beginningCheckNumber, endingCheckNumber, item_count, current_state, printer_id, wrapping_system_type, truck_number) values (-3, 'RRB', 'Monthly', 'Monthly','4501','1998-12-01 08:31:16' , '00000001','00040000', 40000, 'UNASSIGNED', null, 'KERN', '02'); insert into job_state (job_id, acting_user_id, date, state) values (-3, -1, '1998-11-30 08:31:17', 'UNASSIGNED'); ... After the database schema is automatically generated and rebuilt by the Hibernate tool. The following test runs fine up until the call to Session.flush() ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class JobDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private JobDao jobDao; @Autowired private SessionFactory sessionFactory; @Autowired private UserService userService; @Autowired private PrinterService printerService; ... 
@Test public void saveJob_JobAdvancedToAssigned_AllExpectedStateChanges() { //Get an unassigned Job Job job = this.jobDao.getJob(-3L); assertEquals(JobState.UNASSIGNED, job.getCurrentState()); Date advancedToUnassigned = new GregorianCalendar(1998, 10, 30, 8, 31, 17).getTime(); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); //Satisfy advancement constraints and advance job.setPrinter(this.printerService.getPrinter(-1L)); Date advancedToAssigned = new Date(); job.advanceState( this.userService.getUserByUsername("admin"), advancedToAssigned); assertEquals(JobState.ASSIGNED, job.getCurrentState()); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); assertEquals(advancedToAssigned, job.getStateChange(JobState.ASSIGNED).getDate()); //Persist to DB this.sessionFactory.getCurrentSession().flush(); ... } ... } The error thrown is SQLCODE=-803, SQLSTATE=23505: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] org.hibernate.exception.ConstraintViolationException: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1416) at org.hibernate.action.CollectionUpdateAction.execute(CollectionUpdateAction.java:86) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:170) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027) at jaci.dao.JobDaoIntegrationTest.saveJob_JobAdvancedToAssigned_AllExpectedStateChanges(JobDaoIntegrationTest.java:98) at org.springframework.test.context.junit4.SpringTestMethod.invoke(SpringTestMethod.java:160) at org.springframework.test.context.junit4.SpringMethodRoadie.runTestMethod(SpringMethodRoadie.java:233) at org.springframework.test.context.junit4.SpringMethodRoadie$RunBeforesThenTestThenAfters.run(SpringMethodRoadie.java:333) at org.springframework.test.context.junit4.SpringMethodRoadie.runWithRepetitions(SpringMethodRoadie.java:217) at org.springframework.test.context.junit4.SpringMethodRoadie.runTest(SpringMethodRoadie.java:197) at org.springframework.test.context.junit4.SpringMethodRoadie.run(SpringMethodRoadie.java:143) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:160) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:97) Caused by: com.ibm.db2.jcc.b.lm: DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, SQLERRMC=1;ACI_APP.JOB_STATE, DRIVER=3.50.152 at com.ibm.db2.jcc.b.wc.a(wc.java:575) at com.ibm.db2.jcc.b.wc.a(wc.java:57) at com.ibm.db2.jcc.b.wc.a(wc.java:126) at com.ibm.db2.jcc.b.tk.b(tk.java:1593) at com.ibm.db2.jcc.b.tk.c(tk.java:1576) at com.ibm.db2.jcc.t4.db.k(db.java:353) at com.ibm.db2.jcc.t4.db.a(db.java:59) at com.ibm.db2.jcc.t4.t.a(t.java:50) at com.ibm.db2.jcc.t4.tb.b(tb.java:200) at com.ibm.db2.jcc.b.uk.Gb(uk.java:2355) at com.ibm.db2.jcc.b.uk.e(uk.java:3129) at com.ibm.db2.jcc.b.uk.zb(uk.java:568) 
at com.ibm.db2.jcc.b.uk.executeUpdate(uk.java:551) at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:46) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1389) Therein lies my problem… A nearly identical class set (in fact, so identical that I've been chomping at the bit to make it a single class that serves both business entities) runs absolutely fine. It is identical except for name. Instead of Job it's Web. Instead of JobStateChange it's WebStateChange. Instead of JobState it's WebState. Both Job and Web's SortedSet of StateChanges are mapped as a Hibernate CollectionOfElements. Both are @Embeddable. Both are SortType.NATURAL. Both are backed by an Enumeration with some advancement rules in it. And yet when a nearly identical test is run for Web, no issue is discovered and the data flushes fine. For the sake of brevity I won't include all of the Web classes here, but I will include the test and if anyone wants to see the actual sources, I'll include them (just leave a comment). The data seed: insert into web (id, stock_type, pallet, pallet_id, date_received, first_icn, last_icn, shipment_id, current_state) values (-1, 'PF', '0011', 'A', '2008-12-31 08:30:02', '000000001', '000080000', -1, 'UNSTAGED'); insert into web_state (web_id, date, state, acting_user_id) values (-1, '2008-12-31 08:30:03', 'UNSTAGED', -1); The test: ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class WebDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private WebDao webDao; @Autowired private UserService userService; @Autowired private SessionFactory sessionFactory; ... @Test public void saveWeb_WebAdvancedToNewState_AllExpectedStateChanges() { Web web = this.webDao.getWeb(-1L); Date advancedToUnstaged = new GregorianCalendar(2008, 11, 31, 8, 30, 3).getTime(); assertEquals(WebState.UNSTAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); Date advancedToStaged = new Date(); web.advanceState( this.userService.getUserByUsername("admin"), advancedToStaged); this.sessionFactory.getCurrentSession().flush(); web = this.webDao.getWeb(web.getId()); assertEquals( "Web should have moved to STAGED State.", WebState.STAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); assertEquals(advancedToStaged, web.getState(WebState.STAGED).getDate()); assertNotNull(web.getState(WebState.UNSTAGED)); assertNotNull(web.getState(WebState.STAGED)); } ... } As you can see, I assert that the Web was reconstituted the way I expect, I advance it, flush it to the DB, and then re-get it and verify that the states are as I expect. Everything works perfectly. Not so with Job. A possibly pertinent detail: the reconstitution code works fine if I cease to map JobStateChange.date as a TIMESTAMP and instead map it as a DATE, and ensure that all of the StateChanges always occur on different Dates. The problem is that this particular business entity can go through many state changes in a single day, and so it needs to be sorted by time stamp rather than by date. If I don't do this then I can't sort the StateChanges correctly. That being said, WebStateChange.date is also mapped as a TIMESTAMP and so I again remain absolutely befuddled as to where this error is arising from.
I tried to do a fairly thorough job of giving all of the technical details of the implementation but as this particular question is very implementation specific, if I missed anything just let me know in the comments and I'll include it. Thanks so much for your help! UPDATE: Since it turns out to be important to the solution of my problem, I have to include the pertinent bits of the WebStateChange class as well. ... @Embeddable public class WebStateChange implements Comparable<WebStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = WebState.FIELD_LENGTH) private WebState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; ... WebStateChange( final WebState state, final User actingUser, final Date date) { ExceptionUtils.illegalNullArgs(state, actingUser, date); this.state = state; this.actingUser = actingUser; this.date = new Date(date.getTime()); } @Override public int compareTo(final WebStateChange otherStateChange) { return this.date.compareTo(otherStateChange.date); } @Override public boolean equals(final Object candidate) { if (this == candidate) { return true; } else if (!(candidate instanceof WebStateChange)) { return false; } WebStateChange candidateWebState = (WebStateChange) candidate; return this.getState() == candidateWebState.getState() && this.getUser().equals(candidateWebState.getUser()) && this.getDate().equals(candidateWebState.getDate()); } @Override public int hashCode() { return this.getState().hashCode() + this.getUser().hashCode() + this.getDate().hashCode(); } ... }
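A quick way to pin down which row the DB2 error is complaining about is to query the collection table directly: the generated DDL above gives job_state a composite primary key of (job_id, acting_user_id, date, state), so the flush can only fail if Hibernate issues an insert whose values exactly match an existing row. The query below is a diagnostic sketch only; compare its output with the insert statement Hibernate logs for the failing flush (visible with org.hibernate.SQL logging enabled):

  -- List every state-change row already stored for job -3, the job used in the failing test
  SELECT job_id, acting_user_id, date, state
  FROM   job_state
  WHERE  job_id = -3
  ORDER  BY date, state;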

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create the example table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns.  Cool.
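In case you want to reproduce the read and timing figures quoted in tests 1 and 2, they come from the standard session statistics settings; this is just a small convenience wrapper around the update statement already shown above:

  SET STATISTICS IO, TIME ON;

  UPDATE dbo.LOBtest WITH (TABLOCKX)
  SET some_value = some_value - 1;

  SET STATISTICS IO, TIME OFF;
  -- Messages tab: roughly 9 logical reads for test 1, rising to around 4,014
  -- logical reads once the nonclustered index from test 2 is in place.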
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
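As a practical follow-up to that final thought, it can be useful to know where unique indexes with included columns already exist in a database. The catalog-view query below is a sketch added here for convenience (it is not part of the original post); it lists each included column on every unique index so that potential candidates for this behaviour can be reviewed:

  -- Unique indexes that carry included columns, with the type of each included column
  SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
          OBJECT_NAME(i.object_id)        AS table_name,
          i.name                          AS index_name,
          c.name                          AS included_column,
          t.name                          AS column_type
  FROM    sys.indexes AS i
  JOIN    sys.index_columns AS ic
          ON  ic.object_id = i.object_id
          AND ic.index_id  = i.index_id
  JOIN    sys.columns AS c
          ON  c.object_id = ic.object_id
          AND c.column_id = ic.column_id
  JOIN    sys.types AS t
          ON  t.user_type_id = c.user_type_id
  WHERE   i.is_unique = 1
  AND     ic.is_included_column = 1
  ORDER BY table_name, index_name, included_column;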

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted but which has a huge impact on the time spent in an entity modeling environment is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling: database first and model first and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe more in detail below. First the model element related element naming features are highlighted, in the section Automatic model element naming features and after that I'll go more into detail about the relational model element naming features LLBLGen Pro has to offer in the section Automatic relational model element naming features. Automatic model element naming features When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction. Strippers! The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro offers you to define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names. Underscores Be Gone Another thing you might get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to to so. PasCal everywhere... or not, your call LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment, and lower case the rest. 
Say, we keep the defaults, which is remove underscores and PasCal case always and strip the TBL_ fragment, we get with our example TBL_ORDER_LINES, after stripping TBL_ from the table name two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there! Pluralization and Singularization In general entity names are singular, like Customer or OrderLine so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after. Show me the patterns! There are other situations in which you want more flexibility. Say, you have an entity Customer and an entity Order and there's a foreign key constraint defined from the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look like. For example, if you rather have Customer.OrderCollection, you can do so, by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities: Customer and Order, and they have their fields setup properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes, on NHibernate and EF these can be hidden in the generated code if you want to). A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom. Abbreviations... make sense of OrdNr and friends I already described word breaks in the PasCal casing paragraph, how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office has a hate-hate relationship with his keyboard: he can't stand it: typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you also like is to not have to rename that field manually. 
There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs and during reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: Navigate to Conventions -> Element Name Construction -> Abbreviations. Automatic relational model element naming features Not everyone works database first: it may very well be the case you start from scratch, or have to add additional tables to an existing database. For these situations, it's key you have the flexibility that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development. Underscores, welcome back! Not every database is case insensitive, and not every organization requires PasCal cased table/field names, some demand all lower or all uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with: case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below Casing directives so everyone can sleep well at night For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE. Sequence naming after a pattern Some databases support sequences, and using model-first development it's key to have sequences, when needed, to be created automatically and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks. 
When you're using LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on the settings I previously discussed above, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns previously discussed. Grouping and schemas When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and / or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique for these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over what is placed where in the database from your model. But I want plural table names... and TBL_ prefixes! For now we follow best practices which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version. Conclusion LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and that it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!
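To make the model-first naming rules described above a little more concrete, here is a rough sketch of the kind of DDL they might produce for an OrderLine entity placed in a CRM group, with underscores inserted at word breaks, AllUpperCase casing, the group name used as the schema, and a SEQ_ prefix assumed in the Sequence pattern setting. All of the names and types below are illustrative assumptions, not actual LLBLGen Pro output, and the exact result depends on the target database and your own pattern settings:

  -- Oracle-style sketch of the generated schema objects
  CREATE TABLE CRM.ORDER_LINE
  (
      ID            NUMBER(10)  NOT NULL,   -- PK, populated from the sequence below
      ORDER_NUMBER  NUMBER(10)  NOT NULL,   -- ORD_NR expanded via the abbreviation pairs
      CONSTRAINT PK_ORDER_LINE PRIMARY KEY (ID)
  );

  CREATE SEQUENCE CRM.SEQ_ORDER_LINE;       -- name built from the Sequence pattern setting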

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
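Before building the rig itself, it may help to see the two DMVs queried in their simplest form. The snippet below is a minimal sketch against the dbo.Example test table created by the ResetTest procedure further down; the ShowStats procedure later in the post wraps the same idea with partition handling added:

  -- Query Executor view: counted once per appearance of a seek/scan/lookup operator in an executed plan
  SELECT  ISNULL(i.name, i.type_desc) AS index_name,
          ius.user_seeks, ius.user_scans, ius.user_lookups
  FROM    sys.dm_db_index_usage_stats AS ius
  JOIN    sys.indexes AS i
          ON  i.object_id = ius.object_id
          AND i.index_id  = ius.index_id
  WHERE   ius.database_id = DB_ID()   -- run in the ScansAndSeeks database
  AND     ius.object_id   = OBJECT_ID(N'dbo.Example', N'U');

  -- Storage Engine view: every individual range scan and singleton lookup is counted
  SELECT  ISNULL(i.name, i.type_desc) AS index_name,
          ios.range_scan_count, ios.singleton_lookup_count
  FROM    sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL) AS ios
  JOIN    sys.indexes AS i
          ON  i.object_id = ios.object_id
          AND i.index_id  = ios.index_id;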
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Adding SQL Cache Dependencies to the Loosely coupled .NET Cache Provider

    - by Rhames
    This post adds SQL Cache Dependency support to the loosely coupled .NET Cache Provider that I described in the previous post (http://geekswithblogs.net/Rhames/archive/2012/09/11/loosely-coupled-.net-cache-provider-using-dependency-injection.aspx). The sample code is available on github at https://github.com/RobinHames/CacheProvider.git. Each time we want to apply a cache dependency to a call to fetch or cache a data item we need to supply an instance of the relevant dependency implementation. This suggests an Abstract Factory will be useful to create cache dependencies as needed. We can then use Dependency Injection to inject the factory into the relevant consumer. Castle Windsor provides a typed factory facility that will be utilised to implement the cache dependency abstract factory (see http://docs.castleproject.org/Windsor.Typed-Factory-Facility-interface-based-factories.ashx). Cache Dependency Interfaces First I created a set of cache dependency interfaces in the domain layer, which can be used to pass a cache dependency into the cache provider. ICacheDependency The ICacheDependency interface is simply an empty interface that is used as a parent for the specific cache dependency interfaces. This will allow us to place a generic constraint on the Cache Dependency Factory, and will give us a type that can be passed into the relevant Cache Provider methods. namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheDependency { } }   ISqlCacheDependency.cs The ISqlCacheDependency interface provides specific SQL caching details, such as a Sql Command or a database connection and table. It is the concrete implementation of this interface that will be created by the factory in passed into the Cache Provider. using System; using System.Collections.Generic; using System.Linq; using System.Text;   namespace CacheDiSample.Domain.CacheInterfaces { public interface ISqlCacheDependency : ICacheDependency { ISqlCacheDependency Initialise(string databaseConnectionName, string tableName); ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand); } } If we want other types of cache dependencies, such as by key or file, interfaces may be created to support these (the sample code includes an IKeyCacheDependency interface). Modifying ICacheProvider to accept Cache Dependencies Next I modified the exisitng ICacheProvider<T> interface so that cache dependencies may be passed into a Fetch method call. I did this by adding two overloads to the existing Fetch methods, which take an IEnumerable<ICacheDependency> parameter (the IEnumerable allows more than one cache dependency to be included). I also added a method to create cache dependencies. This means that the implementation of the Cache Provider will require a dependency on the Cache Dependency Factory. It is pretty much down to personal choice as to whether this approach is taken, or whether the Cache Dependency Factory is injected directly into the repository or other consumer of Cache Provider. I think, because the cache dependency cannot be used without the Cache Provider, placing the dependency on the factory into the Cache Provider implementation is cleaner. ICacheProvider.cs using System; using System.Collections.Generic;   namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheProvider<T> { T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);   IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);   U CreateCacheDependency<U>() where U : ICacheDependency; } }   Cache Dependency Factory Next I created the interface for the Cache Dependency Factory in the domain layer. ICacheDependencyFactory.cs namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheDependencyFactory { T Create<T>() where T : ICacheDependency;   void Release<T>(T cacheDependency) where T : ICacheDependency; } }   I used the ICacheDependency parent interface as a generic constraint on the create and release methods in the factory interface. Now the interfaces are in place, I moved on to the concrete implementations. ISqlCacheDependency Concrete Implementation The concrete implementation of ISqlCacheDependency will need to provide an instance of System.Web.Caching.SqlCacheDependency to the Cache Provider implementation. Unfortunately this class is sealed, so I cannot simply inherit from this. Instead, I created an interface called IAspNetCacheDependency that will provide a Create method to create an instance of the relevant System.Web.Caching Cache Dependency type. This interface is specific to the ASP.NET implementation of the Cache Provider, so it should be defined in the same layer as the concrete implementation of the Cache Provider (the MVC UI layer in the sample code). IAspNetCacheDependency.cs using System.Web.Caching;   namespace CacheDiSample.CacheProviders { public interface IAspNetCacheDependency { CacheDependency CreateAspNetCacheDependency(); } }   Next, I created the concrete implementation of the ISqlCacheDependency interface. This class also implements the IAspNetCacheDependency interface. This concrete implementation also is defined in the same layer as the Cache Provider implementation. AspNetSqlCacheDependency.cs using System.Web.Caching; using CacheDiSample.Domain.CacheInterfaces;   namespace CacheDiSample.CacheProviders { public class AspNetSqlCacheDependency : ISqlCacheDependency, IAspNetCacheDependency { private string databaseConnectionName;   private string tableName;   private System.Data.SqlClient.SqlCommand sqlCommand;   #region ISqlCacheDependency Members   public ISqlCacheDependency Initialise(string databaseConnectionName, string tableName) { this.databaseConnectionName = databaseConnectionName; this.tableName = tableName; return this; }   public ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand) { this.sqlCommand = sqlCommand; return this; }   #endregion   #region IAspNetCacheDependency Members   public System.Web.Caching.CacheDependency CreateAspNetCacheDependency() { if (sqlCommand != null) return new SqlCacheDependency(sqlCommand); else return new SqlCacheDependency(databaseConnectionName, tableName); }   #endregion   } }   ICacheProvider Concrete Implementation The ICacheProvider interface is implemented by the CacheProvider class. This implementation is modified to include the changes to the ICacheProvider interface. 
First I needed to inject the Cache Dependency Factory into the Cache Provider: private ICacheDependencyFactory cacheDependencyFactory;   public CacheProvider(ICacheDependencyFactory cacheDependencyFactory) { if (cacheDependencyFactory == null) throw new ArgumentNullException("cacheDependencyFactory");   this.cacheDependencyFactory = cacheDependencyFactory; }   Next I implemented the CreateCacheDependency method, which simply passes on the create request to the factory: public U CreateCacheDependency<U>() where U : ICacheDependency { return this.cacheDependencyFactory.Create<U>(); }   The signature of the FetchAndCache helper method was modified to take an additional IEnumerable<ICacheDependency> parameter:   private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) and the following code added to create the relevant System.Web.Caching.CacheDependency object for any dependencies and pass them to the HttpContext Cache: CacheDependency aspNetCacheDependencies = null;   if (cacheDependencies != null) { if (cacheDependencies.Count() == 1) // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency(); else if (cacheDependencies.Count() > 1) { AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency(); foreach (ICacheDependency cacheDependency in cacheDependencies) { // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aggregateCacheDependency.Add(((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency()); } aspNetCacheDependencies = aggregateCacheDependency; } }   HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);   The full code listing for the modified CacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching; using CacheDiSample.Domain.CacheInterfaces;   namespace CacheDiSample.CacheProviders { public class CacheProvider<T> : ICacheProvider<T> { private ICacheDependencyFactory cacheDependencyFactory;   public CacheProvider(ICacheDependencyFactory cacheDependencyFactory) { if (cacheDependencyFactory == null) throw new ArgumentNullException("cacheDependencyFactory");   this.cacheDependencyFactory = cacheDependencyFactory; }   public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, null); }   public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies); }   public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, null); }   public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
The full code listing for the modified CacheProvider class is shown below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;
using CacheDiSample.Domain.CacheInterfaces;

namespace CacheDiSample.CacheProviders
{
    public class CacheProvider<T> : ICacheProvider<T>
    {
        private ICacheDependencyFactory cacheDependencyFactory;

        public CacheProvider(ICacheDependencyFactory cacheDependencyFactory)
        {
            if (cacheDependencyFactory == null)
                throw new ArgumentNullException("cacheDependencyFactory");

            this.cacheDependencyFactory = cacheDependencyFactory;
        }

        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, null);
        }

        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, null);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies);
        }

        public U CreateCacheDependency<U>() where U : ICacheDependency
        {
            return this.cacheDependencyFactory.Create<U>();
        }

        #region Helper Methods

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)
        {
            U value;
            if (!TryGetValue<U>(key, out value))
            {
                value = retrieveData();
                if (!absoluteExpiry.HasValue)
                    absoluteExpiry = Cache.NoAbsoluteExpiration;

                if (!relativeExpiry.HasValue)
                    relativeExpiry = Cache.NoSlidingExpiration;

                CacheDependency aspNetCacheDependencies = null;

                if (cacheDependencies != null)
                {
                    if (cacheDependencies.Count() == 1)
                        // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency
                        // so we can use a cast here and call the CreateAspNetCacheDependency() method
                        aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency();
                    else if (cacheDependencies.Count() > 1)
                    {
                        AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency();
                        foreach (ICacheDependency cacheDependency in cacheDependencies)
                        {
                            // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency
                            // so we can use a cast here and call the CreateAspNetCacheDependency() method
                            aggregateCacheDependency.Add(
                                ((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency());
                        }
                        aspNetCacheDependencies = aggregateCacheDependency;
                    }
                }

                HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);
            }
            return value;
        }

        private bool TryGetValue<U>(string key, out U value)
        {
            object cachedValue = HttpContext.Current.Cache.Get(key);
            if (cachedValue == null)
            {
                value = default(U);
                return false;
            }
            else
            {
                try
                {
                    value = (U)cachedValue;
                    return true;
                }
                catch
                {
                    value = default(U);
                    return false;
                }
            }
        }

        #endregion
    }
}

Wiring up the DI Container

Now that the implementations for the Cache Dependency are in place, I wired them up in the existing Windsor CacheInstaller.

First I needed to register the implementation of the ISqlCacheDependency interface:

container.Register(
    Component.For<ISqlCacheDependency>()
        .ImplementedBy<AspNetSqlCacheDependency>()
        .LifestyleTransient());

Next I registered the Cache Dependency Factory. Notice that I have not written an implementation of the ICacheDependencyFactory interface; Castle Windsor will provide one for me via its Typed Factory Facility.
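To make it clearer what the facility is doing, a hand-written equivalent of the factory would look roughly like the sketch below: resolve the requested component from the container on Create, and hand it back on Release. (This class is for illustration only, it is not needed in the sample, and the WindsorCacheDependencyFactory name is mine.)

using Castle.MicroKernel;
using CacheDiSample.Domain.CacheInterfaces;

namespace CacheDiSample.WindsorInstallers
{
    // Roughly what the Typed Factory Facility generates for ICacheDependencyFactory
    public class WindsorCacheDependencyFactory : ICacheDependencyFactory
    {
        private readonly IKernel kernel;

        public WindsorCacheDependencyFactory(IKernel kernel)
        {
            this.kernel = kernel;
        }

        public T Create<T>() where T : ICacheDependency
        {
            // Resolve the component registered for the requested dependency interface
            return kernel.Resolve<T>();
        }

        public void Release<T>(T cacheDependency) where T : ICacheDependency
        {
            // Return the component to the container so transient instances are not leaked
            kernel.ReleaseComponent(cacheDependency);
        }
    }
}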
I do need to bring the Castle.Facilities.TypedFactory namespace into scope:

using Castle.Facilities.TypedFactory;

Then I registered the Typed Factory Facility and the factory itself:

container.AddFacility<TypedFactoryFacility>();

container.Register(
    Component.For<ICacheDependencyFactory>()
        .AsFactory());

The full code for the CacheInstaller class is:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using Castle.Facilities.TypedFactory;

using CacheDiSample.Domain.CacheInterfaces;
using CacheDiSample.CacheProviders;

namespace CacheDiSample.WindsorInstallers
{
    public class CacheInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(
                Component.For(typeof(ICacheProvider<>))
                    .ImplementedBy(typeof(CacheProvider<>))
                    .LifestyleTransient());

            container.Register(
                Component.For<ISqlCacheDependency>()
                    .ImplementedBy<AspNetSqlCacheDependency>()
                    .LifestyleTransient());

            container.AddFacility<TypedFactoryFacility>();

            container.Register(
                Component.For<ICacheDependencyFactory>()
                    .AsFactory());
        }
    }
}

Configuring the ASP.NET SQL Cache Dependency

A couple of configuration steps are required to enable SQL Cache Dependency for the application and database. From the Visual Studio Command Prompt, the following commands enable cache polling of the relevant database tables:

aspnet_regsql -S <servername> -E -d <databasename> -ed
aspnet_regsql -S <servername> -E -d CacheSample -et -t <tablename>

(The -t option should be repeated for each table that is to be made available for cache dependencies.)

Finally, SQL cache polling needs to be enabled by adding the following configuration to the <system.web> section of web.config:

<caching>
  <sqlCacheDependency pollTime="10000" enabled="true">
    <databases>
      <add name="BloggingContext" connectionStringName="BloggingContext"/>
    </databases>
  </sqlCacheDependency>
</caching>

(Obviously the name and connection string name should be altered as required.)

Using a SQL Cache Dependency

Now all the coding is complete. To specify a SQL Cache Dependency, I can modify my BlogRepositoryWithCaching decorator class (see the earlier post) as follows:

public IList<Blog> GetAll()
{
    var sqlCacheDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>()
        .Initialise("BloggingContext", "Blogs");

    ICacheDependency[] cacheDependencies = new ICacheDependency[] { sqlCacheDependency };

    string key = string.Format("CacheDiSample.DataAccess.GetAll");

    return cacheProvider.Fetch(key,
        () =>
        {
            return parentBlogRepository.GetAll();
        },
        null, null, cacheDependencies)
        .ToList();
}

This adds a dependency on the "Blogs" table in the database. The data will remain in the cache until the contents of that table change; the cache item is then invalidated, and the next call to the GetAll() repository method is routed to the parent repository to refresh the data from the database.
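As an alternative to running aspnet_regsql by hand, the same notifications can be enabled programmatically through SqlCacheDependencyAdmin, for example from a one-off setup routine. This is only a sketch of mine; it assumes the "BloggingContext" connection string and the "Blogs" table from the sample, and the SqlCacheDependencySetup class name is hypothetical:

using System.Configuration;
using System.Web.Caching;

public static class SqlCacheDependencySetup
{
    public static void EnableNotifications()
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["BloggingContext"].ConnectionString;

        // Programmatic equivalent of the aspnet_regsql -ed and -et -t steps
        SqlCacheDependencyAdmin.EnableNotifications(connectionString);
        SqlCacheDependencyAdmin.EnableTableForNotifications(connectionString, "Blogs");
    }
}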


  • Disk Search / Sort Algorithm

    - by AlgoMan
    Given a range of numbers, say 1 to 10,000, with the input arriving in random order. Constraint: at any point only 1,000 numbers can be loaded into memory. Assumption: the numbers are unique. I propose the following "when-required-sort" algorithm. We write the numbers into files, each designated to hold a particular range: for example, File1 holds 0-999, File2 holds 1000-1999, and so on, each in random order. If a particular number, say 2535, is searched for, we know it must be in File3 (a binary search over the ranges finds the file). File3 is then loaded into memory and sorted, say with quicksort (optimised to switch to insertion sort for small arrays), and the number is located in that sorted array with a binary search. When the search is done, the sorted file is written back, so in the long run all of the files end up sorted. Please comment on this proposal. (A sketch of the idea is shown below.)
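    A minimal sketch of the scheme described above (my own illustration, with a hypothetical bucketN.txt naming convention, assuming numbers 1 to 10,000 and buckets of 1,000 values, so no more than 1,000 numbers are ever in memory at once):

    using System;
    using System.IO;
    using System.Linq;

    class BucketFileSearch
    {
        const int BucketSize = 1000;

        static string BucketPath(int bucketIndex)
        {
            return "bucket" + bucketIndex + ".txt"; // hypothetical naming scheme
        }

        // Route each incoming number to the file covering its range (unsorted append)
        public static void Write(int number)
        {
            int bucket = (number - 1) / BucketSize;
            File.AppendAllLines(BucketPath(bucket), new[] { number.ToString() });
        }

        // Load one bucket, sort it, binary-search it, and write the sorted bucket back
        // so later searches on this bucket can skip straight to the binary search
        public static bool Contains(int number)
        {
            int bucket = (number - 1) / BucketSize;
            string path = BucketPath(bucket);
            if (!File.Exists(path))
                return false;

            int[] values = File.ReadAllLines(path).Select(int.Parse).ToArray();
            Array.Sort(values); // the framework sort stands in for the proposed quicksort
            File.WriteAllLines(path, values.Select(v => v.ToString()));
            return Array.BinarySearch(values, number) >= 0;
        }
    }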

