Search Results

Search found 6909 results on 277 pages for 'filter branch'.


  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation, when:

    - The target table uses a unique filtered index; and
    - No key column of the filtered index is updated; and
    - A column from the filtering condition is updated; and
    - Transient key violations are possible.

    Example Tables

    Say we have two tables: one that is the target of a MERGE statement, and another that contains updates to be applied to the target. The target table contains three columns: an integer primary key, a single-character alternate key, and a status code column. A filtered unique index exists on the alternate key, but is only enforced where the status code is 'a':

        CREATE TABLE #Target
        (
            pk integer NOT NULL,
            ak character(1) NOT NULL,
            status_code character(1) NOT NULL,
            PRIMARY KEY (pk)
        );

        CREATE UNIQUE INDEX uq1
        ON #Target (ak)
        INCLUDE (status_code)
        WHERE status_code = 'a';

    The changes table contains just an integer primary key (to identify the target row to change) and the new status code:

        CREATE TABLE #Changes
        (
            pk integer NOT NULL,
            status_code character(1) NOT NULL,
            PRIMARY KEY (pk)
        );

    Sample Data

    The sample data for the example is:

        INSERT #Target (pk, ak, status_code)
        VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');

        INSERT #Changes (pk, status_code)
        VALUES (1, 'd'), (4, 'a');

                 Target                     Changes
        +-----------------------+    +------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦
        ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+
        ¦  4 ¦ A  ¦ d           ¦
        +-----------------------+

    The target table's alternate key (ak) column is unique, for rows where status_code = 'a'. Applying the changes to the target will change row 1 from status 'a' to status 'd', and row 4 from status 'd' to status 'a'. The result of applying all the changes will still satisfy the filtered unique index, because the 'A' in row 1 will be deleted from the index and the 'A' in row 4 will be added.

    Merge Test One

    Let's now execute a MERGE statement to apply the changes:

        MERGE #Target AS t
        USING #Changes AS c
            ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code THEN
            UPDATE SET status_code = c.status_code;

    The MERGE changes the two target rows as expected. The updated target table now contains:

        +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦ <- changed from 'a'
        ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ a           ¦ <- changed from 'd'
        +-----------------------+

    Merge Test Two

    Now let's repopulate the changes table to reverse the updates we just performed:

        TRUNCATE TABLE #Changes;

        INSERT #Changes (pk, status_code)
        VALUES (1, 'a'), (4, 'd');

    This will change row 1 back to status 'a' and row 4 back to status 'd'.
    As a reminder, the current state of the tables is:

                 Target                     Changes
        +-----------------------+    +------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+
        ¦  4 ¦ A  ¦ a           ¦
        +-----------------------+

    We execute the same MERGE statement:

        MERGE #Target AS t
        USING #Changes AS c
            ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code THEN
            UPDATE SET status_code = c.status_code;

    However, this time we receive the following message:

        Msg 2601, Level 14, State 1, Line 1
        Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'.
        The duplicate key value is (A).
        The statement has been terminated.

    Applying the changes using UPDATE

    Let's now rewrite the MERGE to use UPDATE instead:

        UPDATE t
        SET status_code = c.status_code
        FROM #Target AS t
        JOIN #Changes AS c
            ON t.pk = c.pk
        WHERE c.status_code <> t.status_code;

    This query succeeds where the MERGE failed. The two rows are updated as expected:

        +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ a           ¦ <- changed back to 'a'
        ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ d           ¦ <- changed back to 'd'
        +-----------------------+

    What went wrong with the MERGE?

    In this test, the MERGE query execution happens to apply the changes in the order of the pk column.

    In test one, this was not a problem: row 1 is removed from the unique filtered index (by changing status_code from 'a' to 'd') before row 4 is added. At no point does the table contain two rows where ak = 'A' and status_code = 'a'.

    In test two, however, the first change was to move row 1 from status 'd' to status 'a'. That change means there would momentarily be two rows in the filtered unique index where ak = 'A' (both row 1 and row 4 meet the index filtering criterion status_code = 'a').

    The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn't apply to MERGE in any case). This strict rule applies regardless of the fact that, if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from 'a' to 'd', removing it from the filtered unique index and resolving the key violation).

    Why it went wrong

    The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue. I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals, or in Craig Freedman's blog post on maintaining unique indexes). To summarize, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to:

    - Split each row update into a delete followed by an insert
    - Filter out rows that would not change the index (due to the filter on the index, or a non-updating update)
    - Sort the resulting stream by index key, with deletes before inserts
    - Collapse delete/insert pairs on the same index key back into an update

    The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).
    In this case, the net effect is a single update of the filtered unique index: changing the row for ak = 'A' from pk = 4 to pk = 1. In case that is less than 100% clear, let's look at the operation in test two again:

                 Target                     Changes                   Result
        +-----------------------+    +------------------+    +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦    ¦  1 ¦ A  ¦ a           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦    ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦
        +-----------------------+                            +-----------------------+

    From the filtered index's point of view (filtered for status_code = 'a' and shown in nonclustered index key order), the overall effect of the query is:

          Before           After
        +---------+    +---------+
        ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦
        ¦----+----¦    ¦----+----¦
        ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦
        ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦
        ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦
        +---------+    +---------+

    The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = 'A'. This is the magic performed by the split, sort, and collapse. Notice in particular how the original changes to the index key (on the ak column) have been transformed into an update of a non-key column (pk is included in the nonclustered index). By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations.

    The Execution Plans

    The estimated MERGE execution plan that produces the incorrect key-violation error is a narrow (per-row) update: a single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index. The successful UPDATE execution plan is a wide (per-index) update: the clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained.

    There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations. One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur.

    Workarounds

    The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by:

    - Adding all columns referenced in the filtered index's WHERE clause to the index key (INCLUDE is not sufficient); or
    - Executing the query with trace flag 8790 set, e.g. OPTION (QUERYTRACEON 8790).

    Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible). Either change will produce a successfully-executing wide update plan for the MERGE that failed previously.
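    To see the trace-flag workaround in action, the failing statement from test two can simply be re-run with the hint attached. A minimal sketch (same temporary tables as above; QUERYTRACEON 8790 is undocumented, so treat this as illustrative rather than something to rely on in production):

        -- Same MERGE as test two; trace flag 8790 forces the wide (per-index)
        -- update plan, so the split/sort/collapse sequence protects the
        -- filtered unique index and the statement succeeds.
        MERGE #Target AS t
        USING #Changes AS c
            ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code THEN
            UPDATE SET status_code = c.status_code
        OPTION (QUERYTRACEON 8790);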
    Conclusion

    The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post. It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance.

    The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database. Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

    Connect bug filed here

    Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition

    © 2012 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi
    Email: [email protected]


  • CodePlex Daily Summary for Friday, November 11, 2011

    CodePlex Daily Summary for Friday, November 11, 2011

    Popular Releases

    - Composite C1 CMS: Composite C1 3.0 RC3 (3.0.4332.33416): This is currently a Release Candidate. Upgrade guidelines and "what's new" are pending.
    - Dynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection: All classes which facilitate your pagination!
    - Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search. Rebuild Movies now processes thru folders alphabetically. Fix for issue #208 - Display Missing Episodes is not popu...
    - XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427
    - MSBuild Extension Pack: November 2011: Release Blog Post. The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...
    - CODE Framework: 4.0.11110.0: Various minor fixes and tweaks.
    - Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's New / Related Work Items: Please read the latest release notes for details about what's new. Content Summary: Rxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...
    - Player Framework by Microsoft: HTML5 Player Framework 1.0: Additional Downloads: HTML5 Player Framework Examples - This is a set of examples showing how to setup and initialize the HTML5 Player Framework. This includes examples of how to use the Player Framework with both the HTML5 video tag and Silverlight player. Note: Be sure to unblock the zip file before using. Note: In order to test Silverlight fallback in the included sample app, you need to run the html and xap files over http (e.g. over localhost). Silverlight Players - Visit the Silverlig...
    - NewLife XCode ??????: XCode v8.2.2011.1107、XCoder v4.5.2011.1108: v8.2.2011.1107 ?IEntityOperate.Create?Entity.CreateInstance??????forEdit,????????(FindByKeyForEdit)???,???false ??????Entity.CreateInstance,????forEdit,???????????????????? v8.2.2011.1103 ??MS????,??MaxMin??(????????)、NotIn??(????)、?Top??(??NotIn)、RowNumber??(?????) v8.2.2011.1101 SqlServer?????????DataPath,?????????????????????? Oracle?????????DllPath,????OCI??,???????????ORACLE_HOME?? Oracle?????XCode.Oracle.IsUseOwner,???????????Ow...
    - Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods use the graph api instead of the legacy rest api. Removed dependency on Code Contracts. Enabled Task Parallel Support in .NET 4.0+ (experimental). Added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln). Added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress. Added ne...
    - Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATE: Added support for Windows 2003 servers and removed some null reference errors when the registry key was not present. List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments. The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters. Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144
    - Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.
    - GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features coming with the GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now; it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of the GoogleMap control. Better performance. Better...
    - WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right); this uses V4.0 of .net. Version 2.1 adds the following features (apologies if I forget some, added a lot of little things): Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly; you can right click on anything, and choose manual lookup, which will then allow you to type anything you want to look up, and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...
    - SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box. Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only). Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines. Fix: better decision-making in when to prefix a line with a '-' because SDH was removed.
    - AcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...
    - Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root node.
    - Kinect Toolbox: Kinect Toolbox v1.1.0.2: This version adds support for the Kinect for Windows SDK beta 2.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.35: Fix issue #16850 - minifying jQuery 1.7 produced a script error. Need to make sure that any in-operators that get inserted into a for-statement during minification get wrapped in parentheses so the syntax remains correct.

    New Projects

    - Aliyun Open Storage Service: Aliyun Open Storage Service .NET API
    - Automating SQL Azure Backup using Worker role: This tool is used for backup functionality on SQL Azure databases and tables on a periodic timeline. The code can be deployed as a Worker role with Azure or in an on-premise environment, and the backup file can be stored in blob storage or a file system.
    - BindableApplicationBar: A bindable ApplicationBar control wrapper for Windows Phone that allows specifying and updating the ApplicationBar properties by changing the properties of a view model instead of handling events in the code behind.
    - Cheekpad: Cheekpad is a web-based PHP platform designed to be mobile, light, and flexible, combining properties of many education CMSs.
    - ClipoWeb: ClipoWeb is a web clipboard that allows you to copy text and files between computers. Users access a web page on the source and destination computers, and then copy&paste between both pages, just like a clipboard.
    - EntityShape: EntityShape makes it easier for Entity Framework Code First developers to efficiently eager load data across multiple tables. You'll no longer have to use Include, multiple queries or lazy loading to populate large entity graphs. It's developed in C#.Net 4.0.
    - fhdbbv: Project at the HDU, formerly HS Deggendorf, formerly FH Deggendorf, by B. B. V.
    - Geography Services - Helping you convert WKB/WKT into JSON for Google Maps, etc.: This project allows you to easily convert some Well Known Binary or Well Known Text into a custom object for JSON ... to be shown on maps like Google Maps.
    - GSISWebServiceWP7: This is a sample of a Windows Phone client using the Greek GSIS Web Service at http://www.gsis.gr/wsnp.html (in Greek).
    - Ignitron Daphne 2012 - a program for playing draughts: Ignitron Daphne 2012 is an open source project that serves as a universal platform and program for playing draughts. For now it is primarily a desktop application under development for playing Czech draughts, with a planned web server for play over the internet. The core of the software is a universal platform that allows the desktop world to be connected to the web server, and possibly to mobile applications in the future.
    - ksuTweetNew: a simple twitter client developed in Java and the Particle SDK.
    - Lucene Integration with SQL Server: Lucene Integration with SQL Server
    - NHarness: NHarness was primarily written to allow Visual Studio Express users, without access to plugins such as TestDriven.NET, to run their tests in the Visual Studio IDE. A simple RunTestsInClass<T> static method is called, and a detailed TestResult enumeration is returned. NHarness recognises the following NUnit attributes: TestFixtureAttribute, TestFixtureSetUpAttribute, TestFixtureTearDownAttribute, SetUpAttribute, TearDownAttribute, TestAttribute, ExpectedExceptionAttribute (as well as...
    - PostgreSQL Client: A simple WPF PostgreSQL client. The main goal is to provide an easy client for SQL commands.
    - PostTwitt: Post Twitt for DNN
    - Project64-Vanilla: A Project64 fork based on the 1.4 source code. This project is either to improve or provide VC++ 2010 converted source.
    - Reservaai.me: Online table reservation site.
    - sapiens.at.SharePoint List Filter Web Part: The sapiens.at.SharePoint List Filter Web Part for SharePoint 2010 provides you with a convenient way to quickly drill down, filter and find information stored in your SharePoint 2010 lists and document libraries.
    - Token Replay Cache implementation for Windows Azure: There are two objects to download in this release: One is a functional base library for Azure Table, but I'm still ironing out the API. The second is a WIF Token Replay cache implementation that uses Azure Table. The benefit of this approach is that every token is verified and used once, and the token can never be replayed since the cache is infinitely large. This mitigates against the attack where the buffer is overwhelmed and the FIFO cache permits a replay of a valid / expired to...
    - WarmUpService for WebApps: Those who have been programming for the Web must be familiar with sluggish response for the first ever hit to the server. This also happens whenever the IIS/AppPool gets restarted/recycled. The WarmUpService keeps your web targets warmed-up by hitting them periodically.
    - Windows 8 Metro Frame: The Meilos MBS Windows 8 App Frame makes it easy to create simple Windows 8 Apps on Windows 7.
    - WorldWeatherOnline.com plug-in for HouseBot: This plug-in will use the worldweatheronline.com api and display in HouseBot automation software. Please note that it requires the C# wrapper to work, found here: http://www.housebot.com/forums/viewtopic.php?f=4&t=856395
    - XML AppSettings: This is a personal Class Library I wrote on an idea I found somewhere on the web once. If you derive any class from AppSettings, you can serialize all of its public (protected) members into xml by using a single command.
    - YazLab1RoyProject: YazLab1RoyProject


  • 12c - flashforward, flashback or see it as of now...

    - by noreply(at)blogger.com (Thomas Kyte)
    Oracle 9i exposed flashback query to developers for the first time. The ability to flashback query dates back to version 4, however (it just wasn't exposed). Every time you run a query in Oracle it is in fact a flashback query - it is what multi-versioning is all about.

    However, there was never a flashforward query (well, OK, the workspace manager has this capability - but with lots of extra baggage). We've never been able to ask a table "what will you look like tomorrow" - but now we can.

    The capability is called Temporal Validity. If you have a table with data that is effective dated - has a "start date" and "end date" column in it - we can now query it using flashback-query-like syntax. The twist is: the date we "flashback" to can be in the future. It works by rewriting the query to transparently add the necessary where clause and filter out the right rows for the right period of time - and since you can have records whose start date is in the future, you can query a table and see what it would look like at some future time.

    Here is a quick example; we'll start with a table:

        ops$tkyte%ORA12CR1> create table addresses
          2  ( empno       number,
          3    addr_data   varchar2(30),
          4    start_date  date,
          5    end_date    date,
          6    period for valid(start_date,end_date)
          7  )
          8  /

        Table created.

    The new bit is on line 6 (it can be altered into an existing table - so any table you have with a start/end date column will be a candidate). The keyword is PERIOD; valid is an identifier I chose - it could have been foobar, valid just sounds nice in the query later. You identify the columns in your table - or we can create them for you if they don't exist. Then you just create some data:

        ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
          2  values ( 1234, '123 Main Street', trunc(sysdate-5), trunc(sysdate-2) );

        1 row created.

        ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
          2  values ( 1234, '456 Fleet Street', trunc(sysdate-1), trunc(sysdate+1) );

        1 row created.

        ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
          2  values ( 1234, '789 1st Ave', trunc(sysdate+2), null );

        1 row created.

    and you can either see all of the data:

        ops$tkyte%ORA12CR1> select * from addresses;

             EMPNO ADDR_DATA                      START_DAT END_DATE
        ---------- ------------------------------ --------- ---------
              1234 123 Main Street                27-JUN-13 30-JUN-13
              1234 456 Fleet Street               01-JUL-13 03-JUL-13
              1234 789 1st Ave                    04-JUL-13

    or query "as of" some point in time - as you can see in the predicate section, it is just doing a query rewrite to automate the "where" filters:

        ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate-3;

             EMPNO ADDR_DATA                      START_DAT END_DATE
        ---------- ------------------------------ --------- ---------
              1234 123 Main Street                27-JUN-13 30-JUN-13

        ops$tkyte%ORA12CR1> @plan
        ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

        PLAN_TABLE_OUTPUT
        -------------------------------------------------------------------------------
        SQL_ID  cthtvvm0dxvva, child number 0
        -------------------------------------
        select * from addresses as of period for valid sysdate-3

        Plan hash value: 3184888728

        -------------------------------------------------------------------------------
        | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
        |*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
        -------------------------------------------------------------------------------

        Predicate Information (identified by operation id):
        ---------------------------------------------------

           1 - filter((("T"."START_DATE" IS NULL OR
                      "T"."START_DATE"<=SYSDATE@!-3) AND ("T"."END_DATE" IS NULL OR
                      "T"."END_DATE">SYSDATE@!-3)))

        Note
        -----
           - dynamic statistics used: dynamic sampling (level=2)

        24 rows selected.

        ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate;

             EMPNO ADDR_DATA                      START_DAT END_DATE
        ---------- ------------------------------ --------- ---------
              1234 456 Fleet Street               01-JUL-13 03-JUL-13

        ops$tkyte%ORA12CR1> @plan
        ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

        PLAN_TABLE_OUTPUT
        -------------------------------------------------------------------------------
        SQL_ID  26ubyhw9hgk7z, child number 0
        -------------------------------------
        select * from addresses as of period for valid sysdate

        Plan hash value: 3184888728

        -------------------------------------------------------------------------------
        | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
        |*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
        -------------------------------------------------------------------------------

        Predicate Information (identified by operation id):
        ---------------------------------------------------

           1 - filter((("T"."START_DATE" IS NULL OR
                      "T"."START_DATE"<=SYSDATE@!) AND ("T"."END_DATE" IS NULL OR
                      "T"."END_DATE">SYSDATE@!)))

        Note
        -----
           - dynamic statistics used: dynamic sampling (level=2)

        24 rows selected.

        ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate+3;

             EMPNO ADDR_DATA                      START_DAT END_DATE
        ---------- ------------------------------ --------- ---------
              1234 789 1st Ave                    04-JUL-13

        ops$tkyte%ORA12CR1> @plan
        ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

        PLAN_TABLE_OUTPUT
        -------------------------------------------------------------------------------
        SQL_ID  36bq7shnhc888, child number 0
        -------------------------------------
        select * from addresses as of period for valid sysdate+3

        Plan hash value: 3184888728

        -------------------------------------------------------------------------------
        | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
        |*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
        -------------------------------------------------------------------------------

        Predicate Information (identified by operation id):
        ---------------------------------------------------

           1 - filter((("T"."START_DATE" IS NULL OR
                      "T"."START_DATE"<=SYSDATE@!+3) AND ("T"."END_DATE" IS NULL OR
                      "T"."END_DATE">SYSDATE@!+3)))

        Note
        -----
           - dynamic statistics used: dynamic sampling (level=2)

        24 rows selected.

    All in all, a nice, easy way to query effective dated information as of a point in time without a complex where clause. You need to maintain the data - it isn't that a delete will turn into an update that end-dates a record, or anything like that - but if you have tables with start/end dates, this will make it much easier to query them.


  • Rails3 and Paperclip

    - by arkannia
    Hi, I have migrated my application from Rails 2.3 to Rails 3 and I have a problem with paperclip. I saw there was a branch for Rails 3 on the paperclip git repository, so I added the following to the Gemfile and ran bundle install:

        gem 'paperclip', :git => 'git://github.com/thoughtbot/paperclip.git', :branch => 'rails3'

    Once paperclip was installed, the upload worked fine but not the styles. I saw a hack to fix it:

        # in lib/paperclip/attachment.rb at line 293
        def callback which #:nodoc:
          # replace this line...
          # instance.run_callbacks(which, @queued_for_write){|result,obj| result == false }
          # with this:
          instance.run_callbacks(which, @queued_for_write)
        end

    The styles are OK after that, but I'm not able to activate the processor. My code is:

        has_attached_file :image,
          :default_url => "/images/nopicture.jpg",
          :styles => {
            :large => "800x600>",
            :cropped => Proc.new { |instance| "#{instance.width}x#{instance.height}>" },
            :crop => "300x300>"
          },
          :processors => [:cropper]

    My processor is located in RAILS_APP/lib/paperclip_processors/cropper.rb and contains:

        module Paperclip
          class Cropper < Thumbnail
            def transformation_command
              if crop_command and !skip_crop?
                crop_command + super.sub(/ -crop \S+/, '')
              else
                super
              end
            end

            def crop_command
              target = @attachment.instance
              trans = ""
              trans << " -crop #{target.crop_w}x#{target.crop_h}+#{target.crop_x}+#{target.crop_y}" if target.cropping?
              trans << " -resize \"#{target.width}x#{target.height}\""
              trans
            end

            def skip_crop?
              ["800x600>", "300x300>"].include?(@target_geometry.to_s)
            end
          end
        end

    My problem is that I get this error message:

        uninitialized constant Paperclip::Cropper

    The cropper processor is not loaded. Does anybody have an idea how to fix that? For information, my application works fine on Rails 2.3.4.


  • Replacing instructions in a method's MethodBody

    - by Alix
    Hi,

    (First of all, this is a very lengthy post, but don't worry: I've already implemented all of it, I'm just asking your opinion.)

    I'm having trouble implementing the following; I'd appreciate some help:

    1. I get a Type as parameter.
    2. I define a subclass using reflection. Notice that I don't intend to modify the original type, but create a new one.
    3. I create a property per field of the original class, like so:

        public class OriginalClass
        {
            private int x;
        }

        public class Subclass : OriginalClass
        {
            private int x;
            public int X
            {
                get { return x; }
                set { x = value; }
            }
        }

    4. For every method of the superclass, I create an analogous method in the subclass. The method's body must be the same except that I replace the instructions ldfld x with callvirt this.get_X, that is, instead of reading from the field directly I call the get accessor.

    I'm having trouble with step 4. I know you're not supposed to manipulate code like this, but I really need to. Here's what I've tried:

    Attempt #1: Use Mono.Cecil. This would allow me to parse the body of the method into human-readable Instructions, and easily replace instructions. However, the original type isn't in a .dll file, so I can't find a way to load it with Mono.Cecil. Writing the type to a .dll, then loading it, then modifying it and writing the new type to disk (which I think is the way you create a type with Mono.Cecil), and then loading it seems like a huge overhead.

    Attempt #2: Use Mono.Reflection. This would also allow me to parse the body into Instructions, but then I have no support for replacing instructions. I've implemented a very ugly and inefficient solution using Mono.Reflection, but it doesn't yet support methods that contain try-catch statements (although I guess I can implement this) and I'm concerned that there may be other scenarios in which it won't work, since I'm using the ILGenerator in a somewhat unusual way. Also, it's very ugly ;). Here's what I've done:

        private void TransformMethod(MethodInfo methodInfo)
        {
            // Create a method with the same signature.
            ParameterInfo[] paramList = methodInfo.GetParameters();
            Type[] args = new Type[paramList.Length];
            for (int i = 0; i < args.Length; i++)
            {
                args[i] = paramList[i].ParameterType;
            }
            MethodBuilder methodBuilder = typeBuilder.DefineMethod(
                methodInfo.Name,
                methodInfo.Attributes,
                methodInfo.ReturnType,
                args);
            ILGenerator ilGen = methodBuilder.GetILGenerator();

            // Declare the same local variables as in the original method.
            IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables;
            foreach (LocalVariableInfo local in locals)
            {
                ilGen.DeclareLocal(local.LocalType);
            }

            // Get readable instructions.
            IList<Instruction> instructions = methodInfo.GetInstructions();

            // I first need to define labels for every instruction in case I
            // later find a jump to that instruction. Once the instruction has
            // been emitted I cannot label it, so I'll need to do it in advance.
            // Since I'm doing a first pass on the method's body anyway, I could
            // instead just create labels where they are truly needed, but for
            // now I'm using this quick fix.
            Dictionary<int, Label> labels = new Dictionary<int, Label>();
            foreach (Instruction instr in instructions)
            {
                labels[instr.Offset] = ilGen.DefineLabel();
            }

            foreach (Instruction instr in instructions)
            {
                // Mark this instruction with a label, in case there's a branch
                // instruction that jumps here.
                ilGen.MarkLabel(labels[instr.Offset]);

                // If this is the instruction that I want to replace (ldfld x)...
                if (instr.OpCode == OpCodes.Ldfld)
                {
                    // ...get the get accessor for the accessed field (get_X())
                    // (I have the accessors in a dictionary; this isn't relevant),
                    MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0];

                    // ...instead of emitting the original instruction (ldfld x),
                    // emit a call to the get accessor,
                    ilGen.Emit(OpCodes.Callvirt, safeReadAccessor);

                // Else (it's any other instruction), reemit the instruction, unaltered.
                }
                else
                {
                    Reemit(instr, ilGen, labels);
                }
            }
        }

    And here comes the horrible, horrible Reemit method:

        private void Reemit(Instruction instr, ILGenerator ilGen, Dictionary<int, Label> labels)
        {
            // If the instruction doesn't have an operand, emit the opcode and return.
            if (instr.Operand == null)
            {
                ilGen.Emit(instr.OpCode);
                return;
            }

            // Else (it has an operand)...

            // If it's a branch instruction, retrieve the corresponding label (to
            // which we want to jump), emit the instruction and return.
            if (instr.OpCode.FlowControl == FlowControl.Branch)
            {
                ilGen.Emit(instr.OpCode, labels[Int32.Parse(instr.Operand.ToString())]);
                return;
            }

            // Otherwise, simply emit the instruction. I need to use the right
            // Emit call, so I need to cast the operand to its type.
            Type operandType = instr.Operand.GetType();
            if (typeof(byte).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (byte) instr.Operand);
            else if (typeof(double).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (double) instr.Operand);
            else if (typeof(float).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (float) instr.Operand);
            else if (typeof(int).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (int) instr.Operand);
            ... // you get the idea. This is a pretty long method, all like this.
        }

    Branch instructions are a special case because instr.Operand is SByte, but Emit expects an operand of type Label. Hence the need for the Dictionary labels.

    As you can see, this is pretty horrible. What's more, it doesn't work in all cases, for instance with methods that contain try-catch statements, since I haven't emitted them using methods BeginExceptionBlock, BeginCatchBlock, etc., of ILGenerator. This is getting complicated. I guess I can do it: MethodBody has a list of ExceptionHandlingClause that should contain the necessary information to do this. But I don't like this solution anyway, so I'll save this as a last-resort solution.

    Attempt #3: Go bare-back and just copy the byte array returned by MethodBody.GetILAsByteArray(), since I only want to replace a single instruction with another single instruction of the same size that produces the exact same result: it loads the same type of object on the stack, etc. So there won't be any labels shifting and everything should work exactly the same. I've done this, replacing specific bytes of the array and then calling MethodBuilder.CreateMethodBody(byte[], int), but I still get the same error with exceptions, and I still need to declare the local variables or I'll get an error... even when I simply copy the method's body and don't change anything. So this is more efficient but I still have to take care of the exceptions, etc. Sigh.
    Here's the implementation of attempt #3, in case anyone is interested:

        private void TransformMethod(MethodInfo methodInfo,
            Dictionary<string, MethodInfo[]> dataMembersSafeAccessors,
            ModuleBuilder moduleBuilder)
        {
            ParameterInfo[] paramList = methodInfo.GetParameters();
            Type[] args = new Type[paramList.Length];
            for (int i = 0; i < args.Length; i++)
            {
                args[i] = paramList[i].ParameterType;
            }
            MethodBuilder methodBuilder = typeBuilder.DefineMethod(
                methodInfo.Name,
                methodInfo.Attributes,
                methodInfo.ReturnType,
                args);
            ILGenerator ilGen = methodBuilder.GetILGenerator();

            IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables;
            foreach (LocalVariableInfo local in locals)
            {
                ilGen.DeclareLocal(local.LocalType);
            }

            byte[] rawInstructions = methodInfo.GetMethodBody().GetILAsByteArray();
            IList<Instruction> instructions = methodInfo.GetInstructions();
            int k = 0;
            foreach (Instruction instr in instructions)
            {
                if (instr.OpCode == OpCodes.Ldfld)
                {
                    MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0];

                    // Copy the opcode: Callvirt.
                    byte[] bytes = toByteArray(OpCodes.Callvirt.Value);
                    for (int m = 0; m < OpCodes.Callvirt.Size; m++)
                    {
                        rawInstructions[k++] = bytes[bytes.Length - 1 - m];
                    }

                    // Copy the operand: the accessor's metadata token.
                    bytes = toByteArray(moduleBuilder.GetMethodToken(safeReadAccessor).Token);
                    for (int m = instr.Size - OpCodes.Ldfld.Size - 1; m >= 0; m--)
                    {
                        rawInstructions[k++] = bytes[m];
                    }

                // Skip this instruction (do not replace it).
                }
                else
                {
                    k += instr.Size;
                }
            }
            methodBuilder.CreateMethodBody(rawInstructions, rawInstructions.Length);
        }

        private static byte[] toByteArray(int intValue)
        {
            byte[] intBytes = BitConverter.GetBytes(intValue);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(intBytes);
            return intBytes;
        }

        private static byte[] toByteArray(short shortValue)
        {
            byte[] intBytes = BitConverter.GetBytes(shortValue);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(intBytes);
            return intBytes;
        }

    (I know it isn't pretty. Sorry. I put it together quickly to see if it would work.)

    I don't have much hope, but can anyone suggest anything better than this?

    Sorry about the extremely lengthy post, and thanks.
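    For comparison, the ldfld-to-callvirt swap itself is only a few lines once Mono.Cecil can actually load the type, i.e. if the write-to-disk overhead of attempt #1 were accepted. A rough sketch against a recent Mono.Cecil API (assembly, type and member names here are hypothetical, and the subclass-generation steps are skipped):

        // Sketch only: assumes the original type has been persisted to
        // Original.dll and that a get_X accessor already exists on it.
        using System.Linq;
        using Mono.Cecil;
        using Mono.Cecil.Cil;

        static class CecilSketch
        {
            static void Main()
            {
                var module = ModuleDefinition.ReadModule("Original.dll");
                var type = module.GetType("MyNamespace.OriginalClass");
                var getter = type.Methods.First(m => m.Name == "get_X");

                foreach (var method in type.Methods.Where(m => m.HasBody))
                {
                    var il = method.Body.GetILProcessor();
                    // Snapshot the list because we mutate it while iterating.
                    foreach (var instr in method.Body.Instructions.ToList())
                    {
                        if (instr.OpCode == OpCodes.Ldfld &&
                            ((FieldReference)instr.Operand).Name == "x")
                        {
                            // Replacing in place keeps branch targets valid:
                            // Cecil branches reference Instruction objects,
                            // not byte offsets.
                            il.Replace(instr, il.Create(OpCodes.Callvirt, getter));
                        }
                    }
                }

                module.Write("Transformed.dll");
            }
        }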


  • Is there a perfect algorithm for chess?

    - by Overflown
    Dear Stack Overflow community,

    I was recently in a discussion with a non-coder person on the possibilities of chess computers. I'm not well versed in theory, but think I know enough.

    I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that, even if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could do.

    My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however you define that). However, being a programmer who has taken CS, I know that even your good choices - given a wise opponent - can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic.

    Most chess computers try to match a possible end game to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable though.

    -- thanks, Allan

    Edit: Hmm... looks like I ruffled some feathers here. That's good.

    Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally).

    I guess my thought experiment was that whenever a branch in the tree is taken, then the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch on the opponent moves. After the discussion, I will buy that given more memory than we can possibly dream of, all these paths could be found.


  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers.

    Once an issue is complete, it is built and the code only committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT undertaken. If rejected, the developer makes the relevant changes and starts the release process again.

    This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up in files associated with a signed-off release.

    I would like to move this to a more controlled and automated process where all source code and output files are held in SVN, and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications).

    My question is how to manage the releases of multiple changes where some will be signed off and moved on, and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?


  • Updating Workflow Task without Correlation Token

    - by Khurram Aziz
    I have inherited a sequential SharePoint workflow which deals with multiple tasks for different people: multi-step approval based on certain conditions. For each approval a new task is created and monitored. For some reason, we have decided to use a single task for the whole workflow, and the single task will get assigned to the required person at different stages. This will help us reduce the clutter in the task list.

    To refactor it, I am trying to create a "CreateOrUpdateTaskAndWaitForCompletition" activity, so that I can use this component multiple times as per the given workflow.

    The Create/Wait branch of my activity works fine, as I have the correlation token within the activity. But when I try two instances of this activity, the task is created in the first activity and it needs to be updated in the second instance, where I don't have the correlation token. In the second instance (Update/Wait branch) I have tried updating the task through a code activity, but it's not working and I am getting a "This task is currently locked by a running workflow and cannot be edited" exception!

    Can I use the UpdateTask activity without a correlation token? Can I programmatically update the workflow task? Can I programmatically unlock the workflow task?


  • DynamicJasper font encoding problem

    - by user266956
    Hello All,

    I'm new to DynamicJasper. I think it's a great project, but for a few days I have not been able to run a simple example which displays Polish fonts properly. The details section is generated without any problems, automatically displaying Polish letters like 'c z z a' etc., whereas the title and column headers contain strange letters instead. I was trying to do the following, but it doesn't work:

        FastReportBuilder drb = new FastReportBuilder();

        Font font = new Font(25, "SansSerif", "Helvetica",
                Font.PDF_ENCODING_CP1257_Baltic, true);
        Style titleStyle = new StyleBuilder(false).setFont(font).build();

        DynamicReport dr = drb.addColumn("State", "state", String.class.getName(), 30)
                .addColumn("Branch", "branch", String.class.getName(), 30)
                .addColumn("Product Line", "productLine", String.class.getName(), 50)
                .addGroups(2)
                .setTitle("November 2008 Zrebie aczc!")
                .setTitleStyle(titleStyle)
                .setSubtitle("This report was generated at " + new Date())
                .setPrintBackgroundOnOddRows(true)
                .setUseFullPageWidth(true)
                .build();

        JRDataSource ds = new JRBeanCollectionDataSource(this.simples);
        JasperPrint jp = DynamicJasperHelper.generateJasperPrint(dr,
                new ClassicLayoutManager(), ds);
        JasperViewer.viewReport(jp); // finally display the report

    Any help would be highly appreciated, as the DynamicJasper forums seem to be dead and constantly spammed. Thanks in advance!

    Kris


  • How does Visual Studio's source control integration work with Perforce?

    - by Weeble
    We're using Perforce and Visual Studio. Whenever we create a branch, some projects will not be bound to source control unless we use "Open from Source Control", but other projects work regardless. From my investigations, I know some of the things involved:

    In our .csproj files, there are these settings:

        SccProjectName
        SccLocalPath
        SccAuxPath
        SccProvider

    Sometimes they are all set to "SAK", sometimes not. It seems things are more likely to work if these say "SAK".

    In our .sln file, there are settings for many of the projects:

        SccLocalPath#
        SccProjectFilePathRelativizedFromConnection#
        SccProjectUniqueName#

    (The # is a number that identifies each project.)

    SccLocalPath is a path relative to the solution file. Often it is ".", sometimes it is the folder that the project is in, and sometimes it is ".." or "..\..", and it seems to be bad for it to point to a folder above the solution folder. The relativized one is a path from that folder to the project file. It will be missing entirely if SccLocalPath points to the project's folder. If the SccLocalPath has ".." in it, this path might include folder names that are not the same between branches, which I think causes problems.

    So, to finally get to the specifics I'd like to know:

    - What happens when you do "Change source control" and bind projects? How does Visual Studio decide what to put in the project and solution files?
    - What happens when you do "Open from source control"?
    - What's this "connection" folder that SccLocalPath and SccProjectFilePathRelativizedFromConnection refer to? How does Visual Studio/Perforce pick it?
    - Is there some recommended way to make the source control bindings continue to work even when you create a new branch of the solution?
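    For concreteness, here is roughly what the four .csproj settings listed above look like when a project is bound with the "SAK" marker. A sketch from memory rather than from the question (the enclosing PropertyGroup and element order may vary):

        <PropertyGroup>
          <SccProjectName>SAK</SccProjectName>
          <SccLocalPath>SAK</SccLocalPath>
          <SccAuxPath>SAK</SccAuxPath>
          <SccProvider>SAK</SccProvider>
        </PropertyGroup>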


  • In Mercurial, what is the exact step that Peter or me has to do so that he gets back the rolled back version?

    - by Jian Lin
    The short question is: if I hg rollback, how does Peter get my rolled-back version if he cloned from me? What are the exact steps he or I have to do or type?

    This is related to http://stackoverflow.com/questions/3034793/in-mercurial-when-peter-hg-clone-me-and-i-commit-and-he-pull-and-update-he-g

    The details: After the following steps, Mary has 7 and Peter has 11. My repository is 7. What are the exact steps Peter or I have to do or type SO THAT PETER GETS 7 back?

        F:\>mkdir hgme

        F:\>cd hgme

        F:\hgme>hg init

        F:\hgme>echo the code is 7 > code.txt

        F:\hgme>hg add code.txt

        F:\hgme>hg commit -m "this is version 1"

        F:\hgme>cd ..

        F:\>hg clone hgme hgpeter
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved

        F:\>cd hgpeter

        F:\hgpeter>type code.txt
        the code is 7

        F:\hgpeter>cd ..

        F:\>cd hgme

        F:\hgme>notepad code.txt   [now i change 7 to 11]

        F:\hgme>hg commit -m "this is version 2"

        F:\hgme>cd ..

        F:\>cd hgpeter

        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        (run 'hg update' to get a working copy)

        F:\hgpeter>hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved

        F:\hgpeter>type code.txt
        the code is 11

        F:\hgpeter>cd ..

        F:\>cd hgme

        F:\hgme>hg rollback
        rolling back last transaction

        F:\hgme>cd ..

        F:\>hg clone hgme hgmary
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved

        F:\>cd hgmary

        F:\hgmary>type code.txt
        the code is 7

        F:\hgmary>cd ..

        F:\>cd hgpeter

        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        no changes found

        F:\hgpeter>hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved

        F:\hgpeter>type code.txt
        the code is 11

        F:\hgpeter>
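    Since hg pull only ever adds changesets, the rolled-back changeset will not disappear from Peter's clone on its own. One blunt option, sketched below in the same transcript style, is for Peter to discard his clone and re-clone from the rolled-back repository (hg strip, from the bundled mq extension, would be a less drastic alternative):

        F:\>rmdir /s /q hgpeter

        F:\>hg clone hgme hgpeter
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved

        F:\>cd hgpeter

        F:\hgpeter>type code.txt
        the code is 7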


  • Git branching / rebasing good practices

    - by Pawel Krupinski
    I have the following scenario with three branches:

    - Master
    - MyBranch, branched off Master for the purpose of developing a new feature of the system
    - MyBranchLocal, branched off MyBranch as my local copy of the branch

    MyBranch is being rebased against, and pushed to, by other developers (who are working on the same feature as I am). As the owner of the MyBranch branch, I want to keep it in sync with Master by rebasing. I also need to merge the changes I make to MyBranchLocal with MyBranch. What is a good way to do that?

    A couple of possible scenarios I tried so far (scenario I is sketched as commands below):

    I.
    1. Commit change to MyBranchLocal
    2. Rebase MyBranch against Master
    3. Rebase MyBranchLocal against MyBranch
    4. Merge MyBranch with MyBranchLocal

    II.
    1. Commit change to MyBranchLocal
    2. Merge MyBranch with MyBranchLocal
    3. Rebase MyBranch against Master
    4. Rebase MyBranchLocal against MyBranch

    III.
    1. Commit change to MyBranchLocal
    2. Rebase MyBranch against Master
    3. Merge MyBranch with MyBranchLocal
    4. Rebase MyBranchLocal against MyBranch

    I already know that scenario III seems to mess up the commit history a lot, potentially duplicating commits. What is your experience? What scenarios do you recommend?
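    A minimal sketch of scenario I as plain git commands (branch names as in the question; this assumes Master is the local branch named master, and that rebasing the shared MyBranch is coordinated with the other developers):

        git checkout MyBranchLocal
        git commit -am "my local work"   # 1. commit change to MyBranchLocal

        git checkout MyBranch
        git rebase master                # 2. rebase MyBranch against Master

        git checkout MyBranchLocal
        git rebase MyBranch              # 3. rebase MyBranchLocal against MyBranch

        git checkout MyBranch
        git merge MyBranchLocal          # 4. merge MyBranch with MyBranchLocal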


  • How to get local bzr commits to server?

    - by C.W.Holeman II
    launchpad.net states that for the project Emle - Electronic Mathematics Laboratory Equipment, the 2.0 series is the current focus of development.

    This is what I have done so far:

    1. Set the launchpad.net project to import from the sourceforge.net project Emle (this actually set the launchpad.net project to mirror the sourceforge.net project rather than just import the content once).
    2. Examined the launchpad.net project to see that the three commits (#1 - #3) which were done in the sourceforge.net project previously made it into launchpad.net.
    3. Used bzr to get the launchpad.net project, which I did while it was still set for mirroring.
    4. Made three changes and commits using bzr (#4 - #6).
    5. Was unable to see the changes on the launchpad.net site.
    6. Requested the mirroring to be stopped (it did stop).

    Here is an extract from launchpad.net for the project Emle 2.0 series, showing that launchpad.net has #1 - #3:

        Code for this series
        The following branch has been registered as the mainline branch for this release series:
        lp:emle - C.W.Holeman II
        3 revisions, 3 in the past month.

    Here on my system, showing changes #1 - #6:

        $ bzr log --line
        6: C.W.Holeman II 2010-02-27 #528096 Corrected setting of paramter value for emleDi...
        5: C.W.Holeman II 2010-02-27 Minor refactor - improved comment regarding workaround...
        4: C.W.Holeman II 2010-02-27 #529089 #529087 Index file html tag lang attribute cor...
        3: cwhii 2010-02-25 {Emle 2.4-5 (BL0011)} New top levels: trunk and tags.
        2: cwhii 2010-02-25 New top levels: trunk and tags.
        1: cwhii 2010-02-25 New top level trunk and tags.

    How do I get the changes that are in bzr on my system to apply to launchpad.net?
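    For reference, once the mirroring is stopped, the usual way to get local bzr commits onto the Launchpad mainline is a plain bzr push to the branch's lp: address. A minimal sketch (assuming the local branch directory and commit rights to lp:emle; not verified against this project's permissions):

        $ cd emle
        $ bzr push lp:emle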


  • Why does my Workflow Service (4.0) variable go null in a DoWhile Activity?

    - by jlafay
    I have a WF service where I'm trying to set up receive activities for "Subscribe" and "Unsubscribe". I'm using this WF Durable Duplex tutorial as a basis, because my service performs callbacks to clients. Basically, think of it as a chat service. I can make client calls to the two receive activities just fine. What happens is the callback address of the client is passed in to Subscribe() on the service. The address is stored as a variable in the WF service, and everything looks like it would work as expected. When a client calls Unsubscribe(), a watch I have set on the address variable during debugging shows it as null.

    So what gives? Here's the basic setup of my WF service layout...

    Everything is enveloped in a DoWhile activity. Inside of that is a Pick activity with two branches. The first branch is for subscribing activities. It has a receive-sendreply activity that assigns the string passed by the client to the WF address variable. The second branch handles unsubscribing. The trigger is the Request activity and the client address is again passed in. From there it goes into a Sequence, starting with an If. It checks to see if the unsubscribeAddress equals the address already subscribed. If it does, then it sets the address to String.Empty and sends a success message back to the client.

    Why would a variable that's scoped to the enveloping DoWhile activity be implicitly assigned to null? I'm trying to get this to work so I can implement multiple client subscribers from there, and work on triggers that invoke callbacks to multiple clients.

    EDIT: I set a breakpoint at the DoWhile level and my variable is null once Unsubscribe() is called. When Subscribe() is invoked, the watch shows a value in the variable all the way through. Until I Unsubscribe() with a client. Should I be using a While activity instead?

    Read the article

  • Inserting instructions into method.

    - by Alix
    Hi, (First of all, this is a very lengthy post, but don't worry: I've already implemented all of it, I'm just asking your opinion.) I'm having trouble implementing the following; I'd appreciate some help: I get a Type as parameter. I define a subclass using reflection. Notice that I don't intend to modify the original type, but create a new one. I create a property per field of the original class, like so: [- ignore this text here; I had to add something or the formatting wouldn't work <-] public class OriginalClass { private int x; } public class Subclass : OriginalClass { private int x; public int X { get { return x; } set { x = value; } } } [This is number 4! Numbered lists don't work if you add code in between; sorry] For every method of the superclass, I create an analogous method in the subclass. The method's body must be the same except that I replace the instructions ldfld x with callvirt this.get_X, that is, instead of reading from the field directly I call the get accessor. I'm having trouble with step 4. I know you're not supposed to manipulate code like this, but I really need to. Here's what I've tried: Attempt #1: Use Mono.Cecil. This would allow me to parse the body of the method into human-readable Instructions, and easily replace instructions. However, the original type isn't in a .dll file, so I can't find a way to load it with Mono.Cecil. Writing the type to a .dll, then load it, then modify it and write the new type to disk (which I think is the way you create a type with Mono.Cecil), and then load it seems like a huge overhead. Attempt #2: Use Mono.Reflection. This would also allow me to parse the body into Instructions, but then I have no support for replacing instructions. I've implemented a very ugly and inefficient solution using Mono.Reflection, but it doesn't yet support methods that contain try-catch statements (although I guess I can implement this) and I'm concerned that there may be other scenarios in which it won't work, since I'm using the ILGenerator in a somewhat unusual way. Also, it's very ugly ;). Here's what I've done: private void TransformMethod(MethodInfo methodInfo) { // Create a method with the same signature. ParameterInfo[] paramList = methodInfo.GetParameters(); Type[] args = new Type[paramList.Length]; for (int i = 0; i < args.Length; i++) { args[i] = paramList[i].ParameterType; } MethodBuilder methodBuilder = typeBuilder.DefineMethod( methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args); ILGenerator ilGen = methodBuilder.GetILGenerator(); // Declare the same local variables as in the original method. IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables; foreach (LocalVariableInfo local in locals) { ilGen.DeclareLocal(local.LocalType); } // Get readable instructions. IList<Instruction> instructions = methodInfo.GetInstructions(); // I first need to define labels for every instruction in case I // later find a jump to that instruction. Once the instruction has // been emitted I cannot label it, so I'll need to do it in advance. // Since I'm doing a first pass on the method's body anyway, I could // instead just create labels where they are truly needed, but for // now I'm using this quick fix. Dictionary<int, Label> labels = new Dictionary<int, Label>(); foreach (Instruction instr in instructions) { labels[instr.Offset] = ilGen.DefineLabel(); } foreach (Instruction instr in instructions) { // Mark this instruction with a label, in case there's a branch // instruction that jumps here. 
                ilGen.MarkLabel(labels[instr.Offset]);

                // If this is the instruction that I want to replace (ldfld x)...
                if (instr.OpCode == OpCodes.Ldfld)
                {
                    // ...get the get accessor for the accessed field (get_X())
                    // (I have the accessors in a dictionary; this isn't
                    // relevant here)...
                    MethodInfo safeReadAccessor = dataMembersSafeAccessors[
                        ((FieldInfo) instr.Operand).Name][0];
                    // ...and instead of emitting the original instruction
                    // (ldfld x), emit a call to the get accessor.
                    ilGen.Emit(OpCodes.Callvirt, safeReadAccessor);
                }
                else
                {
                    // Any other instruction is re-emitted, unaltered.
                    Reemit(instr, ilGen, labels);
                }
            }
        }

    And here comes the horrible, horrible Reemit method:

        private void Reemit(Instruction instr, ILGenerator ilGen,
                            Dictionary<int, Label> labels)
        {
            // If the instruction doesn't have an operand, emit the opcode
            // and return.
            if (instr.Operand == null)
            {
                ilGen.Emit(instr.OpCode);
                return;
            }

            // Else (it has an operand)...
            // If it's a branch instruction, retrieve the corresponding label
            // (the one we want to jump to), emit the instruction and return.
            if (instr.OpCode.FlowControl == FlowControl.Branch)
            {
                ilGen.Emit(instr.OpCode,
                           labels[Int32.Parse(instr.Operand.ToString())]);
                return;
            }

            // Otherwise, simply emit the instruction. I need to use the right
            // Emit overload, so I need to cast the operand to its type.
            Type operandType = instr.Operand.GetType();
            if (typeof(byte).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (byte) instr.Operand);
            else if (typeof(double).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (double) instr.Operand);
            else if (typeof(float).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (float) instr.Operand);
            else if (typeof(int).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (int) instr.Operand);
            ... // you get the idea. This is a pretty long method, all like this.
        }

    Branch instructions are a special case because instr.Operand is an SByte, but Emit expects an operand of type Label; hence the need for the labels dictionary. As you can see, this is pretty horrible. What's more, it doesn't work in all cases, for instance with methods that contain try-catch statements, since I haven't emitted them using the BeginExceptionBlock, BeginCatchBlock, etc. methods of ILGenerator. This is getting complicated. I guess I can do it: MethodBody has a list of ExceptionHandlingClause objects that should contain the necessary information. But I don't like this solution anyway, so I'll save it as a last resort.

    Attempt #3: Go bare-back and just copy the byte array returned by MethodBody.GetILAsByteArray(), since I only want to replace a single instruction with another single instruction of the same size that produces the exact same result: it loads the same type of object on the stack, etc. So no labels will shift and everything should work exactly the same. I've done this, replacing specific bytes of the array and then calling MethodBuilder.CreateMethodBody(byte[], int), but I still get the same error with exceptions, and I still need to declare the local variables or I get an error... even when I simply copy the method's body and don't change anything. So this is more efficient, but I still have to take care of the exceptions, etc. Sigh.
    Here's the implementation of attempt #3, in case anyone is interested:

        private void TransformMethod(MethodInfo methodInfo,
            Dictionary<string, MethodInfo[]> dataMembersSafeAccessors,
            ModuleBuilder moduleBuilder)
        {
            ParameterInfo[] paramList = methodInfo.GetParameters();
            Type[] args = new Type[paramList.Length];
            for (int i = 0; i < args.Length; i++)
            {
                args[i] = paramList[i].ParameterType;
            }
            MethodBuilder methodBuilder = typeBuilder.DefineMethod(
                methodInfo.Name, methodInfo.Attributes,
                methodInfo.ReturnType, args);
            ILGenerator ilGen = methodBuilder.GetILGenerator();

            IList<LocalVariableInfo> locals =
                methodInfo.GetMethodBody().LocalVariables;
            foreach (LocalVariableInfo local in locals)
            {
                ilGen.DeclareLocal(local.LocalType);
            }

            byte[] rawInstructions =
                methodInfo.GetMethodBody().GetILAsByteArray();
            IList<Instruction> instructions = methodInfo.GetInstructions();

            int k = 0;
            foreach (Instruction instr in instructions)
            {
                if (instr.OpCode == OpCodes.Ldfld)
                {
                    MethodInfo safeReadAccessor = dataMembersSafeAccessors[
                        ((FieldInfo) instr.Operand).Name][0];

                    // Overwrite the opcode bytes with callvirt.
                    byte[] bytes = toByteArray(OpCodes.Callvirt.Value);
                    for (int m = 0; m < OpCodes.Callvirt.Size; m++)
                    {
                        rawInstructions[k++] = bytes[bytes.Length - 1 - m];
                    }

                    // Overwrite the operand with the accessor's metadata token.
                    bytes = toByteArray(
                        moduleBuilder.GetMethodToken(safeReadAccessor).Token);
                    for (int m = instr.Size - OpCodes.Ldfld.Size - 1; m >= 0; m--)
                    {
                        rawInstructions[k++] = bytes[m];
                    }
                }
                else
                {
                    k += instr.Size;
                }
            }
            methodBuilder.CreateMethodBody(rawInstructions,
                                           rawInstructions.Length);
        }

        private static byte[] toByteArray(int intValue)
        {
            byte[] intBytes = BitConverter.GetBytes(intValue);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(intBytes);
            return intBytes;
        }

        private static byte[] toByteArray(short shortValue)
        {
            byte[] intBytes = BitConverter.GetBytes(shortValue);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(intBytes);
            return intBytes;
        }

    (I know it isn't pretty. Sorry. I put it together quickly to see if it would work.) I don't have much hope, but can anyone suggest anything better than this? Sorry about the extremely lengthy post, and thanks.
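    [Editor's note: regarding attempt #2, a possibly less fragile shape for Reemit is to dispatch on OpCode.OperandType, which names the operand kind each opcode was encoded with, instead of probing the operand's runtime type. This is a sketch only: it assumes Mono.Reflection's Instruction type (Offset/OpCode/Operand) and assumes the caller has already resolved a branch operand to the target instruction's offset; try-catch regions would still need BeginExceptionBlock and friends.]

        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Reflection.Emit;
        using Mono.Reflection; // assumed: provides Instruction and GetInstructions()

        static class ReemitSketch
        {
            static void Reemit(Instruction instr, ILGenerator ilGen,
                               Dictionary<int, Label> labels, int branchTargetOffset)
            {
                switch (instr.OpCode.OperandType)
                {
                    case OperandType.InlineNone:
                        ilGen.Emit(instr.OpCode);
                        break;
                    case OperandType.InlineBrTarget:
                    case OperandType.ShortInlineBrTarget:
                        // branchTargetOffset: the jump target's Offset,
                        // resolved by the caller from instr.Operand
                        ilGen.Emit(instr.OpCode, labels[branchTargetOffset]);
                        break;
                    case OperandType.InlineMethod:
                        // note: newobj carries a ConstructorInfo instead and
                        // would need its own cast
                        ilGen.Emit(instr.OpCode, (MethodInfo) instr.Operand);
                        break;
                    case OperandType.InlineField:
                        ilGen.Emit(instr.OpCode, (FieldInfo) instr.Operand);
                        break;
                    case OperandType.InlineType:
                        ilGen.Emit(instr.OpCode, (Type) instr.Operand);
                        break;
                    case OperandType.InlineString:
                        ilGen.Emit(instr.OpCode, (string) instr.Operand);
                        break;
                    case OperandType.InlineI:
                        ilGen.Emit(instr.OpCode, (int) instr.Operand);
                        break;
                    case OperandType.InlineI8:
                        ilGen.Emit(instr.OpCode, (long) instr.Operand);
                        break;
                    case OperandType.ShortInlineI:
                        ilGen.Emit(instr.OpCode, (sbyte) instr.Operand);
                        break;
                    case OperandType.InlineR:
                        ilGen.Emit(instr.OpCode, (double) instr.Operand);
                        break;
                    case OperandType.ShortInlineR:
                        ilGen.Emit(instr.OpCode, (float) instr.Operand);
                        break;
                    default:
                        // InlineVar, InlineSwitch, InlineTok, InlineSig:
                        // each needs dedicated handling
                        throw new NotSupportedException(
                            instr.OpCode.OperandType.ToString());
                }
            }
        }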

    Read the article

  • How should I setup my Visual Studio projects/solutions in a Mercurial repository?

    - by Dave A
    At my company we have a few different web apps that each share some common libraries. The Visual Studio setup looks like this:

    Website 1 Solution
        Website 1
        Shared Library 1 Project
        Shared Library 2 Project
    Website 2 Solution
        Website 2
        Shared Library 1 Project
        Shared Library 2 Project
    Windows Service Solution
        Windows Service Project
        Shared Library 1 Project
        Shared Library 2 Project
    Shared Library Solution
        Shared Library 1 Project
        Shared Library 2 Project
    All Projects Solution
        Website 1
        Website 2
        Windows Service Project
        Shared Library 1 Project
        Shared Library 2 Project

    We want to start using Mercurial for source control, but I'm still not sure of the best way to do it. From what I've read you're supposed to use a separate repository for each project. No problem there, but where do the Visual Studio solution files (.sln) go? Should there be a separate repository with just an .sln file? Ideally the projects that use the shared libraries should all use the same version, and the "All Projects Solution" should build without errors, but sometimes we need to branch the shared libraries. What is the best way to do this, and how would the repositories be set up? How do I get a working copy of a certain branch/tag of the Website 1 solution when every project is in a separate repository? Do I have to pull each one separately, or write a script to do it all at once? Can TortoiseHg do that for me? Any other tips to make this process easier?
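    [Editor's note: one Mercurial feature that maps onto this directly is subrepositories; the sketch below uses hypothetical paths and server names. A .hgsub file in, say, the Website 1 repository maps working-directory paths to the shared-library repositories, and Mercurial pins the exact revision of each subrepo in .hgsubstate on every commit:]

        SharedLib1 = ssh://hg@hgserver/shared-library-1
        SharedLib2 = ssh://hg@hgserver/shared-library-2

    [The .sln can then live in the parent repository next to .hgsub, and a single hg clone of the parent checks out each library at the recorded revision, which answers the "one pull or many" question without a wrapper script.]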

    Read the article

  • Pushing to bare Git repository (remote) causes it to stop being bare

    - by NSD
    I have a local repository called TestRepo. I clone it with the --bare option, zip the clone up, and throw it on my server. Unzipped, it's still bare. I then clone the bare remote repository locally over ssh with something like:

        git clone ssh://[email protected]/~/TestRepo.git TestRepoCloned

    The local TestRepoCloned is not bare and has a remote called "origin". It appears to be tracking correctly, judging from its config file:

        [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
            ignorecase = true
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = ssh://[email protected]/~/TestRepo.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    I edit an existing file and commit the change to the current branch (master) via git commit -a -m "Edited a file." The commit succeeds and all is well. I then push this change to the remote repository over SSH with a plain git push. The remote repository is now no longer bare but has a complete working directory, and I get continuous error messages on all further attempts to push to it. Everything I've read suggests that what I'm doing is correct, but it simply is not working. How am I supposed to push changes to a bare remote repo and actually keep it bare?
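    [Editor's note: one thing worth checking on the server, offered as a guess rather than a confirmed diagnosis, since the zip/unzip step is the unusual part: whether the repository's core.bare flag survived the trip. A repository whose config says bare = false behaves as if it owns a working tree, which would match the symptoms.]

        # on the server, inside TestRepo.git
        git config core.bare               # a bare repo should print: true
        git config --bool core.bare true   # restore the flag if it was lost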

    Read the article

  • Merging: hg/git vs. svn

    - by stmax
    I often read that hg (and git, and so on) are better at merging than svn, but I have never seen practical examples of where hg/git can merge something that svn fails on (or where svn needs manual intervention). Could you post a few step-by-step lists of branch/modify/commit/...-operations that show where svn would fail while hg/git happily moves on? Practical, not highly exceptional cases, please...

    Some background: we have a few dozen developers working on projects using svn, with each project (or group of similar projects) in its own repo. We know how to apply release and feature branches, so we don't run into problems very often (i.e. we've been there, but we've learned to overcome Joel's problems of "one programmer causing trauma to the whole team" or "needing six developers for two weeks to reintegrate a branch"). We have release branches that are very stable and only used to apply bugfixes. We have trunks that should be stable enough to create a release from within one week. And we have feature branches that single developers or groups of developers can work on. Yes, they are deleted after reintegration so they don't clutter up the repository. ;)

    So I'm still trying to find the advantages of hg/git over svn. I'd love to get some hands-on experience, but there aren't any bigger projects we could move to hg/git yet, so I'm stuck playing with small artificial projects that only contain a few made-up files. And I'm looking for a few cases where you can feel the impressive power of hg/git, since so far I have often read about them but failed to find them myself.
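    [Editor's note: the case most often cited in these comparisons is rename-versus-edit. A sketch with a hypothetical repo layout, as an illustration rather than a transcript: one developer edits a file on trunk while another renames the same file on a branch.]

        # svn records a rename as delete+add, so merging trunk's edit into
        # the branch produces a tree conflict and the edit does not land in
        # the renamed file without manual help
        svn copy ^/trunk ^/branches/feature -m "create branch"
        # on trunk: edit foo.c and commit
        # in a branch working copy: svn mv foo.c bar.c && svn ci
        svn merge ^/trunk          # tree conflict on foo.c

        # git detects the rename at merge time and carries the edit over
        git checkout feature       # branch where foo.c became bar.c
        git merge master           # typically merges cleanly into bar.c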

    Read the article

  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want each git commit to become a single darcs patchset. I bet this is possible with some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v   # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:

    1. Use git fetch to get a local copy of the remote branch (not sure exactly what arguments are needed).
    2. Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this).
    3. Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this once I have the hash).

    I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
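    [Editor's note: a sketch of steps 1 and 2, assuming the mirror tracks origin/master and only ever moves forward (branch and remote names are assumptions). git rev-list --reverse enumerates exactly the commits the mirror hasn't recorded yet, oldest first:]

        git fetch origin

        # every commit not yet mirrored, oldest first
        for rev in $(git rev-list --reverse HEAD..origin/master); do
            git reset --hard "$rev"   # advance the working copy one commit

            author="$(git log -1 --pretty=format:"%an <%ae>" "$rev")"
            logfile=$(mktemp)
            git log -1 --pretty=format:"%s%n%b%n" "$rev" > "$logfile"

            darcs add -q --umask=0002 -r .
            darcs record -a -A "$author" --logfile="$logfile"
            rm -f "$logfile"
        done
        darcs push -a

    [This is a skeleton rather than a drop-in replacement; error handling and the mirror's initial position are left out.]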

    Read the article

  • Running bundle install fails trying to remote fetch from rubygems.org/quick/Marshal...

    - by dreeves
    I'm getting a strange error when doing bundle install:

        $ bundle install
        Fetching source index for http://rubygems.org/
        rvm/rubies/ree-1.8.7-2010.02/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:304
        :in `open_uri_or_path': bad response Not Found 404
        (http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz)
        (Gem::RemoteFetcher::FetchError)

    I've tried bundle update, gem source -c, gem update --system, gem cleanup, etc. Nothing seems to solve this. I notice that the URL beginning with http://rubygems.org/quick does seem to be a 404 -- I don't think that's a problem with my network, though if that URL is reachable for anyone else, that would be a simple explanation for my problem. More hints: if I just gem install resque-scheduler, it works fine:

        $ gem install resque-scheduler
        Successfully installed resque-scheduler-1.9.7
        1 gem installed
        Installing ri documentation for resque-scheduler-1.9.7...
        Installing RDoc documentation for resque-scheduler-1.9.7...

    And here's my Gemfile:

        source 'http://rubygems.org'
        gem 'json'
        gem 'rails', '>=3.0.0'
        gem 'mongo'
        gem 'mongo_mapper', :git => 'git://github.com/jnunemaker/mongomapper', :branch => 'rails3'
        gem 'bson_ext', '1.1'
        gem 'bson', '1.1'
        gem 'mm-multi-parameter-attributes', :git => 'git://github.com/rlivsey/mm-multi-parameter-attributes.git'
        gem 'devise', '~>1.1.3'
        gem 'devise_invitable', '~> 0.3.4'
        gem 'devise-mongo_mapper', :git => 'git://github.com/collectiveidea/devise-mongo_mapper'
        gem 'carrierwave', :git => 'git://github.com/rsofaer/carrierwave.git', :branch => 'master'
        gem 'mini_magick'
        gem 'jquery-rails', '>= 0.2.6'
        gem 'resque'
        gem 'resque-scheduler'
        gem 'SystemTimer'
        gem 'capistrano'
        gem 'will_paginate', '3.0.pre2'
        gem 'twitter', '~> 1.0.0'
        gem 'oauth', '~> 0.4.4'
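    [Editor's note: one detail stands out, offered as an observation rather than a verified fix: the 404 is for resque-scheduler 1.09.7, while the version that installs by hand is 1.9.7, which suggests the fetched index contains a stale or yanked entry. If so, pinning the Gemfile to the version that demonstrably exists may sidestep the bad lookup:]

        # pin to the version gem install actually resolves; the failing URL
        # references 1.09.7, which appears not to exist on rubygems.org
        gem 'resque-scheduler', '1.9.7'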

    Read the article

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP! I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently, I have 28,300 rows, but this may change daily. Example of data format as follows:

        Case Number   Branch   Driver
        1342          NYC      Bob
        4532          PHL      Jim
        7391          CIN      John
        8251          SAN      John
        7211          SAN      Mary
        9121          CLE      John
        7424          CIN      John

    Example of finished table:

        Driver   NYC   PHL   CIN   SAN   CLE
        Bob      1     0     0     0     0
        Jim      0     1     0     0     0
        John     0     0     2     1     1
        Mary     0     0     0     1     0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long

            Set WSD = Worksheets("PivotTable")

            'Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count). _
                End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:= _
                xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD. _
                Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With

            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub

    Read the article

  • I need to remove JavaScript tags using regular expressions and JRegex

    - by piotr
    I need to remove all the JavaScript tags and the content in between, as well as style tags, from the HTML code of web pages. So far I've come up with this expression:

        "(<[ \r\n\t]script([ \r\n\t]|){1,}([ \r\n\t]|.)?)|(<[ \r\n\t]noscript([ \r\n\t]|){1,}([ \r\n\t]|.)?)|(<[ \r\n\t]style([ \r\n\t]|){1,}([ \r\n\t]|.)?)"

    I use the JRegex library to work with regular expressions. When I test the expression in any regex tester it works just fine, but once I run my program it all crashes down with this error report:

        Exception in thread "Thread-0" java.lang.StackOverflowError
            at java.util.regex.Pattern$BranchConn.match(Unknown Source)
            at java.util.regex.Pattern$BmpCharProperty.match(Unknown Source)
            at java.util.regex.Pattern$Branch.match(Unknown Source)
            at java.util.regex.Pattern$GroupHead.match(Unknown Source)
            at java.util.regex.Pattern$LazyLoop.match(Unknown Source)
            at java.util.regex.Pattern$GroupTail.match(Unknown Source)
            at java.util.regex.Pattern$BranchConn.match(Unknown Source)
            at java.util.regex.Pattern$CharProperty.match(Unknown Source)
            at java.util.regex.Pattern$Branch.match(Unknown Source)
            at java.util.regex.Pattern$GroupHead.match(Unknown Source)
            at java.util.regex.Pattern$LazyLoop.match(Unknown Source)
            ..................................

    And it keeps going forever. If anyone can give me advice on this one, I'll be very grateful.
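    [Editor's note: a sketch of a workaround, using java.util.regex rather than JRegex and assuming reasonably well-formed markup; a regex is still no substitute for a real HTML parser. The empty branch inside ([ \r\n\t]|){1,} forces the engine to try an enormous number of alternatives and recurse once per character, which is what exhausts the stack; a single reluctant .*? per tag pair avoids that.]

        import java.util.regex.Pattern;

        public class TagStripper {

            // One DOTALL pattern covers script/noscript/style pairs; the
            // backreference \1 makes the closing tag match the opening one.
            private static final Pattern SCRIPT_STYLE = Pattern.compile(
                    "<(script|noscript|style)\\b[^>]*>.*?</\\1\\s*>",
                    Pattern.DOTALL | Pattern.CASE_INSENSITIVE);

            public static String strip(String html) {
                return SCRIPT_STYLE.matcher(html).replaceAll("");
            }

            public static void main(String[] args) {
                String html = "a<script type=\"text/javascript\">var x;</script>"
                            + "b<style>p { color: red; }</style>c";
                System.out.println(strip(html));  // prints: abc
            }
        }

    [Even reluctant quantifiers recurse per character in Java's engine, so for multi-megabyte pages a larger thread stack or an indexOf-based scan may still be needed.]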

    Read the article

  • Please help me with a Git workflow

    - by aaron carlino
    I'm an SVN user hoping to move to Git. I've been reading documentation and tutorials all day, and I still have unanswered questions. I don't know if this workflow will make sense, but here's my situation and what I would like to get out of my workflow:

    - Multiple developers, all developing locally on their workstations
    - 3 versions of the website: Dev, Staging, Production

    Here's my dream: a developer works locally on his own branch, say "developer1", tests on his local machine, and commits his changes. Another developer can pull those changes down into his own branch (merging developer1 into developer2). When the work is ready to be seen by the public, I'd like to be able to "push" to Dev, Staging, or Production:

        git push origin staging

    or maybe:

        git merge developer1 staging

    I'm not sure. Like I said, I'm still new to it. Here are my main questions:

    - Do my websites (Dev, Staging, Production) have to be repositories? And do they have to be "bare" in order to receive new changes?
    - Do I want one repository or many, with several branches?
    - Does this even make sense, or am I on the wrong path?

    I've read a lot of tutorials, so I'm really hoping someone can just help me out with my specific situation. Thanks so much!
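    [Editor's note: one common shape for this, sketched with hypothetical names and not the only viable layout: a single bare central repository in which dev, staging, and production are just branches, with developer branches shared through the same remote.]

        # every developer clones the one bare central repo
        git clone ssh://git@server/srv/git/site.git
        cd site

        git checkout -b developer1 origin/dev   # personal branch based on dev
        # ...work, test locally, commit...
        git push origin developer1              # developer2 can fetch and merge this

        git checkout dev
        git merge developer1                    # promote the finished work
        git push origin dev                     # the Dev site deploys from this branch

    [Under this layout the websites themselves don't need to be repositories: a post-receive hook or deploy script on the central repo checks the matching branch out onto each server, and only the central repo has to be bare.]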

    Read the article

  • Does Github.com have to create a merge commit when you merge from a fork?

    - by Nishant
    I cloned the master and started doing my work. Due to permissions, I push my branch to my fork. I then send a pull request to the master, and someone with permission does the merge. I notice that Github.com creates a merge commit snapshot, which to me looks like just a diff of the entire change set. That isn't strictly necessary, though it is helpful in the sense that I can look at the merge commit to see the entire diff. I can see the same SHA hashes as on my own branch; hence it looks like the merge adds an extra commit that probably isn't necessary, since it could be a fast-forward:

        master            - a
        myfork (computer) - a -> b -> c
        myfork (github)   - a -> b -> c

    The pull request from myfork to master (which it says it can merge automatically) shows the entire diff, and when I merge it, master shows up as a -> b -> c -> d. The d is a merge commit, which I think is not really required because the merge could have been a fast-forward. Can someone explain why this happens? I think this would be the same scenario if I had rebased onto master after master moved ahead, but that has not happened: master is still at a when I merge.
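    [Editor's note: a sketch, with a hypothetical fork URL and branch name: GitHub's web merge button records a merge commit even when a fast-forward would do, but a maintainer merging the same pull request from the command line can insist on the fast-forward.]

        git checkout master
        git fetch git://github.com/contributor/fork.git topic
        git merge --ff-only FETCH_HEAD   # fast-forwards, or refuses rather
                                         # than creating a merge commit d
        git push origin master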

    Read the article

  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem; let's say it's fictional to protect the guilty. A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this:

    1. A chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker.
    2. The issue is assigned to a developer.
    3. The developer resolves the issue and commits their code changes to the trunk.
    4. At release time, the frozen, heavily tested trunk (or release branch, or whatever) is built in release mode and released.

    The problem he's having is that a couple of newbies made several bad commits that weren't caught, due to an unfortunate chain of events. This was followed by a bad release with a rollback or a flurry of hot fixes. One idea we're toying with: revoke commit access to the trunk for newbies and make them develop on a per-developer branch (we're using SVN).

    - Good: newbies are isolated and can't hurt others
    - Good: committers merge newbie branches with the trunk frequently
    - Good: this enforces rigid code reviews
    - Bad: this is burdensome on the committers (but there's probably no way around it, since the code needs to be reviewed!)
    - Bad: it might make traceability of trunk changes a little tougher, since the reviewer would be doing the commit (not too sure on this)

    Update: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor-quality changes to the trunk. Plugging that hole is most important; relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole (testing) is plugged, mistakes by everyone, newbie or senior, will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!
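    [Editor's note: if the per-developer-branch idea goes forward, SVN can enforce it server-side with path-based authorization. A minimal sketch of the authz file, read by svnserve or mod_authz_svn, with hypothetical group and path names:]

        [groups]
        committers = alice, bob
        newbies = carol, dave

        # newbies can read trunk but not commit to it
        [/trunk]
        @committers = rw
        @newbies = r

        # newbie work happens under branches and is merged
        # to trunk by a committer after review
        [/branches]
        @committers = rw
        @newbies = rw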

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >