Search Results

Search found 11052 results on 443 pages for 'linked tables'.


  • undefined method `code' for nil:NilClass message with Rails and a legacy database

    - by Jude Osborn
    I'm setting up a very simple rails 3 application to view data in a legacy MySQL database. The legacy database is mostly rails ORM compatible, except that foreign key fields are pluralized. For example, my "orders" table has a foreign key field to the "companies" table called "companies_id" (rather than "company_id"). So naturally I'm having to use the ":foreign_key" attribute of "belongs_to" to set the field name manually. I haven't used rails in a few years, but I'm pretty sure I'm doing everything right, yet I get the following error when trying to access "order.currency.code": undefined method `code' for nil:NilClass This is a very simple application so far. The only thing I've done is generate the application and a bunch of scaffolds for each of the legacy database tables. Then I've gone into some of the models to make adjustments to accommodate the above mentioned difference in database naming conventions, and added some fields to the views. That's it. No funny business. So my database tables look like this (relevant fields only): orders ------ id description invoice_number currencies_id currencies ---------- id code description My Order model looks like this: class Order < ActiveRecord::Base belongs_to :currency, :foreign_key=>'currencies_id' end My Currency model looks like this: class Currency < ActiveRecord::Base has_many :orders end The relevant view snippet looks like this: <% @orders.each do |order| %> <tr> <td><%= order.description %></td> <td><%= order.invoice_number %></td> <td><%= order.currency.code %></td> </tr> <% end %> I'm completely out of ideas. Any suggestions?
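    One quick check, not part of the original question, is whether the data rather than the mapping is at fault: any order whose currencies_id is NULL or points at a missing currencies row will make order.currency return nil, and calling .code on nil produces exactly this error. A minimal diagnostic sketch against the schema described above:

        -- Find orders with no matching currency row (schema as described above).
        SELECT o.id, o.invoice_number, o.currencies_id
        FROM orders o
        LEFT JOIN currencies c ON c.id = o.currencies_id
        WHERE c.id IS NULL;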

    Read the article

  • MySQL: check and compare data and, if necessary, create it

    - by user2979677
    Hello I have three different tables Talk ID type_id YEAR NUM NUM_LETTER DATE series_id TALK speaker_id SCRIPREF DONE DOUBLE_SID DOUBLE_TYP LOCAL_CODE MISSING RESTRICTED WHY_RESTRI SR_S_1 SR_E_1 BCV_1 SR_S_2 SR_E_2 BCV_2 SR_S_3 SR_E_3 BCV_3 SR_S_4 SR_E_4 BCV_4 QTY_IV organisation_id recommended topic_id thumbnail mp3_file_size duration version Product_component id product_id talk_id position version Product id created product_type_id last_modified last_modified_by num_sold current_stock min_stock max_stock comment organisation_id series_id name subscription_type_id recipient_id discount_start discount_amount discount_desc discount_finish discount_percent voucher_amount audio_points talk_id price_override product_desc instant_download_status_id downloads instant_downloads promote_start promote_finish promote_desc restricted from_tape discontinued discontinued_reason discontinued_date external_url version I want to create a procedure that will check if select the id from talk and compare it with the id of product to see if there is a product id in the table and if there isn't then create it however my problem is that my tables talk and product can't talk, as id in talk is related to talk_id in product_component and id in product is related to product_id in product_component. Are there any ways for this to be done? I tried this, CREATE DEFINER=`sthelensmedia`@`localhost` PROCEDURE `CreateSingleCDProducts`() BEGIN DECLARE t_id INT; DECLARE t_restricted BOOLEAN; DECLARE t_talk CHAR(255); DECLARE t_series_id INT; DECLARE p_id INT; DECLARE done INT DEFAULT 0; DECLARE cur1 CURSOR FOR SELECT id,restricted,talk,series_id FROM talk WHERE organisation_id=2; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1; UPDATE product SET restricted=TRUE WHERE product_type_id=3; OPEN cur1; create_loop: LOOP FETCH cur1 INTO t_id, t_restricted, t_talk, t_series_id; IF done=1 THEN LEAVE create_loop; END IF; INSERT INTO product (created,product_type_id,last_modified,organisation_id,series_id,name,restricted) VALUES (NOW(),3,NOW(),2,t_series_id,t_talk,t_restricted); SELECT LAST_INSERT_ID() INTO p_id; INSERT INTO product_component (product_id,talk_id,position) VALUES (p_id,t_id,0); END LOOP create_loop; CLOSE cur1; END Just wondering if anyone could help me.
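    For the product rows themselves, a set-based INSERT ... SELECT with NOT EXISTS can replace the cursor; the product_component link still needs the new product id per row, so that step keeps the LAST_INSERT_ID approach from the original procedure. A sketch assuming the column names listed above:

        -- Insert a product for every talk (organisation 2) that has no
        -- product_component row yet; the linking row is created afterwards,
        -- as in the original procedure, because it needs the new product id.
        INSERT INTO product (created, product_type_id, last_modified,
                             organisation_id, series_id, name, restricted)
        SELECT NOW(), 3, NOW(), 2, t.series_id, t.talk, t.restricted
        FROM talk t
        WHERE t.organisation_id = 2
          AND NOT EXISTS (SELECT 1
                          FROM product_component pc
                          WHERE pc.talk_id = t.id);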

    Read the article

  • I’m screwing up this LEFT JOIN

    - by Jason Shultz
    I'm doing a left join on several tables. What I want to happen is that it lists all the businesses. Then, it looks through the photos, videos, specials and categories. If there are photos, then the table shows yes, if there are videos, it shows yes in the table. It does all that without any problems, except for one thing. For every photo, it shows the business that many times. For example, if there are 5 photos in the DB for a business, it shows the business five times. Obviously, this is not what I want to happen. Can you help? function frontPageList() { $this->db->select('b.id, b.busname, b.busowner, b.webaddress, p.thumb, v.title, c.catname'); $this->db->from ('business AS b'); $this->db->where('b.featured', '1'); $this->db->join('photos AS p', 'p.busid = b.id', 'left'); $this->db->join('video AS v', 'v.busid = b.id', 'left'); $this->db->join('specials AS s', 's.busid = b.id', 'left'); $this->db->join('category As c', 'b.category = c.id', 'left'); return $this->db->get(); }
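    The multiplication happens because each matching photo (or video, or special) row produces its own copy of the business row in the join result. One common fix, sketched below with the table and column names from the query above, is to join only the one-to-one category table and derive yes/no flags with EXISTS instead of joining the one-to-many tables directly:

        SELECT b.id, b.busname, b.busowner, b.webaddress, c.catname,
               EXISTS (SELECT 1 FROM photos   p WHERE p.busid = b.id) AS has_photos,
               EXISTS (SELECT 1 FROM video    v WHERE v.busid = b.id) AS has_videos,
               EXISTS (SELECT 1 FROM specials s WHERE s.busid = b.id) AS has_specials
        FROM business b
        LEFT JOIN category c ON c.id = b.category
        WHERE b.featured = 1;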

    Read the article

  • What are the advantages of using a one-to-one table relationship? (MySQL)

    - by byronh
    What are advantages of using a one-to-one table relationship as opposed to simply storing all the data in one table? I understand and make use of one-to-many, many-to-one, and many-to-many all the time, but implementing a one-to-one relationship seems like a tedious and unnecessary task, especially if you use naming conventions for relating (php) objects to database tables. I couldn't find anything on the net or on this site that could supply a good real-world example of a one-to-one relationship. At first I thought it might be logical to separate 'users', for example, into two tables, one containing public information like an 'about me' for profile pages and one containing private information such as login/password, etc. But why go through all the trouble of using unnecessary JOINS when you can just choose which fields to select from that table anyway? If I'm displaying the user's profile page, obviously I would only SELECT id,username,email,aboutme etc. and not the fields containing their private info. Anyone care to enlighten me with some real-world examples of one-to-one relationships?
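    A typical real-world use, beyond simply choosing which columns to SELECT, is keeping a small, frequently-read row separate from wide or sensitive columns, so the hot table stays compact and access permissions can differ per table. A sketch with hypothetical table names, where both tables share the same primary key value:

        CREATE TABLE users (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50)  NOT NULL,
            email    VARCHAR(255) NOT NULL,
            aboutme  TEXT
        );

        CREATE TABLE user_credentials (
            user_id       INT UNSIGNED NOT NULL PRIMARY KEY,  -- same value as users.id: one row per user
            password_hash CHAR(60)     NOT NULL,
            last_login    DATETIME     NULL,
            CONSTRAINT fk_user_credentials_users
                FOREIGN KEY (user_id) REFERENCES users (id)
        );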

    Read the article

  • How to split the column values using a stored procedure

    - by user1444281
    I have two tables table 1 is SELECT * FROM dbo.TBL_WD_WEB_DECK WD_ID WD_TITLE ------------------ 1 2 and data in 2nd table is WS_ID WS_WEBPAGE_ID WS_SPONSORS_ID WS_STATUS ----------------------------------------------------- 1 1 1,2,3,4 Y I wrote the following stored procedure to insert the data into both the tables catching the identity of main table dbo.TBL_WD_WEB_DECK. WD_ID is related with WS_WEBPAGE_ID. I wrote the cursor in update action to split the WS_SPONSORS_ID column calling the split function in the cursor. But it is not working. The stored procedure is: ALTER procedure [dbo].[SP_Example_SPLIT] ( @action char(1), @wd_id int out, @wd_title varchar(50), @ws_webpage_id int, @ws_sponsors_id varchar(250), @ws_status char(1) ) as begin set nocount on IF (@action = 'A') BEGIN INSERT INTO dbo.TBL_WD_WEB_DECK(WD_TITLE)VALUES(@WD_TITLE) DECLARE @X INT SET @X = @@IDENTITY INSERT INTO dbo.TBL_WD_SPONSORS(WS_WEBPAGE_ID,WS_SPONSORS_ID,WS_STATUS) VALUES (@ws_webpage_id,@ws_sponsors_id,@ws_status) END ELSE IF (@action = 'U') BEGIN UPDATE dbo.TBL_WD_WEB_DECK SET WD_TITLE = @wd_title WHERE WD_ID = @WD_ID UPDATE dbo.TBL_WD_SPONSORS SET WS_STATUS = 'N' where WS_SPONSORS_ID not in (@ws_sponsors_id) and ws_webpage_id = @wd_id BEGIN /* Declaring Cursor to split the value in ws_sponsors_id column */ Declare @var int Declare splt cursor for /* used the split function and calling the parameter in that split function */ select * from iter_simple_intlist_to_tbl(@WS_SPONSORS_ID) OPEN splt FETCH NEXT FROM splt INTO @var WHILE (@@FETCH_STATUS = 0) begin if not exists(select * from dbo.TBL_WD_SPONSORS where WS_WEBPAGE_ID = @wd_id and WS_SPONSORS_ID = @var) begin insert into dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID,WS_SPONSORS_ID) values(@wd_id,@var) end end CLOSE SPONSOR DEALLOCATE SPONSOR END END END The result I want is if I insert the data in WD_ID and IN WS_SPONSORS_ID column the data in the WS_SPONSORS_ID column should split and I need to compare it with WD_ID. The result I need is: WD_ID WD_TITLE --------------------- 1 TEST 2 TEST1 3 WS_ID WS_WEBPAGE_ID WS_SPONSORS_ID WS_STATUS -------------------------------------------------------- 1 1 1 Y 2 1 2 N 3 1 3 Y If I pass the string in WS_SPONSORS_ID as 1,2,3 it has to split like the above. Can you help?
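    As posted, the cursor loop never fetches the next value inside the WHILE block and closes/deallocates a cursor named SPONSOR rather than splt, which would explain why it is not working. A simpler route, sketched below, is to skip the cursor and insert set-based, reusing the split function already referenced above (the name of its result column, number here, is an assumption):

        -- Insert one sponsor row per id in the comma-separated @ws_sponsors_id,
        -- skipping ids already present for this page.
        INSERT INTO dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID, WS_SPONSORS_ID)
        SELECT @wd_id, s.number
        FROM dbo.iter_simple_intlist_to_tbl(@ws_sponsors_id) AS s
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.TBL_WD_SPONSORS w
                          WHERE w.WS_WEBPAGE_ID = @wd_id
                            AND w.WS_SPONSORS_ID = s.number);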

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schema provided by with the Oracle database such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit, one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record, why do we see JBO-25014? : Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0 if memory serves me correct) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting –Djbo.debugoutput=console.  At runtime this options turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm. 
Setting our locking mode to pessimistic, opening the Business Components Browser of a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record see among others the following log entries: [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[423] Where binding param 1: 1206  As can be seen on line 422, in fact a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see: [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[445] Update binding param 1: N[446] Where binding param 2: 1206[447] BookingsView1 notify COMMIT ... [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [449] EntityCache close prepared statement ....and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs: [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[513] Where binding param 1: 1205[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[517] Update binding param 1: Y[518] Where binding param 2: 1205[519] BookingsView1 notify COMMIT ... [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [521] EntityCache close prepared statement Again even though we're seeing the midtier delay the LOCK statement until commit time, it is in fact occurring on line 412, and released as part of the commit issued on line 419.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be, unless there's the unlikely cause the LOCK statement is never really hitting the database, or the even less likely cause the database has a bug, then ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it. 
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario. Instead of trigger views I must admit I haven't double checked this scenario but it seems plausible, that of the Oracle database's instead of trigger view (sometimes referred to as instead of views).  A view in the database is based on a query, and dependent on the queries complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, that gives the view designer the ability to not only define the view query, but a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and it's data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of 1 developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record. At this point under project timelines pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default value that the existing Oracle Forms  applications assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and timely exercise, not ideal. Solving the mystery once and for all Luckily ADF has built in functionality to deal with this issue, though it's not a surprise, as Oracle as the author of ADF also built the database, and are fully aware of the Oracle database's feature set.  At the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.  
Simply selecting these instructs ADF BC after inserting or updating a record to the database, to expect the database to modify the said attributes, and read a copy of the changed attributes back into its cached mid-tier record.  Thus next time the developer modifies the current record, the comparison between the mid-tier record and the database record match, and JBO-25014: Another user has changed" is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out the above solution will not work for instead-of-triggers views as it relies on SQL RETURNING clause which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back on the original issue will return.
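    To make the first two causes concrete, here is a small illustrative sketch (a hypothetical table and trigger, not from the original post) of the database-side rewrites described above: ADF inserts or updates the row, the column default and the trigger silently change values, ADF's cached copy no longer matches the database, and the next commit raises JBO-25014 unless Refresh After Insert/Update (or a Change Indicator) is configured:

        -- Hypothetical Oracle table: the DEFAULT fires when NULL is inserted,
        -- and the trigger overwrites a column on every insert/update.
        CREATE TABLE bookings_demo (
          booking_no   NUMBER      PRIMARY KEY,
          chargeable   CHAR(1)     DEFAULT 'N' NOT NULL,
          last_updated DATE
        );

        CREATE OR REPLACE TRIGGER bookings_demo_biu
        BEFORE INSERT OR UPDATE ON bookings_demo
        FOR EACH ROW
        BEGIN
          :NEW.last_updated := SYSDATE;  -- value written differs from what the mid-tier sent
        END;
        /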

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here. Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition. © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]
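    For reference, the two workarounds listed above, sketched against the sample tables from this post (the trace flag is undocumented, as noted, so treat it as a last resort):

        -- Workaround 1: make the filter column part of the index key (INCLUDE is not sufficient).
        DROP INDEX uq1 ON #Target;

        CREATE UNIQUE INDEX uq1
        ON #Target (ak, status_code)
        WHERE status_code = 'a';

        -- Workaround 2: force the wide (per-index) update plan with trace flag 8790.
        MERGE #Target AS t
        USING #Changes AS c
            ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code THEN
            UPDATE SET status_code = c.status_code
        OPTION (QUERYTRACEON 8790);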

    Read the article

  • CodePlex Daily Summary for Monday, February 21, 2011

    CodePlex Daily Summary for Monday, February 21, 2011Popular ReleasesA2Command: 2011-02-21 - Version 1.0: IntroductionThis is the full release version of A2Command 1.0, dated February 21, 2011. These notes supersede any prior version's notes. All prior releases may be found on the project's website at http://a2command.codeplex.com/releases/ where you can read the release notes for older versions as well as download them. This version of A2Command is intended to replace any previous version you may have downloaded in the past. There were several bug fixes made after Release Candidate 2 and all...Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedbacks are Welcome!....Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...Azure Storage Samples: Version 1.0 (February 2011): These downloads contain source code. Each is a complete sample that fully exercises Windows Azure Storage across blobs, queues, and tables. The difference between the downloads is implementation approach. Storage DotNet CS.zip is a .NET StorageClient library implementation in the C# language. This library come with the Windows Azure SDK. Contains helper classes for accessing blobs, queues, and tables. Storage REST CS.zip is a REST implementation in the C# language. The code to implement R...MiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.119.18): Initial releaseWindows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under Lockscreen. 
Fixed a bug when you open a pdf/mobi file without starting adobe reader/amazon kindle first boost loading time for folders added \Windows directory (all devices) you can now interact with the filesystem while it is loading!Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The 2° beta release of the Map Editor, we have fixed a big bug of the files regen.Document.Editor: 2011.6: Whats new for Document.Editor 2011.6: New Left to Right and Left to Right support New Indent more/less support Improved Home tab Improved Tooltips/shortcut keys Minor Bug Fix's, improvements and speed upsCatel - WPF and Silverlight MVVM library: 1.2: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com =========== Version 1.2 =========== Release date: ============= 2011/02/17 Added/fixed: ============ (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure SQL Azure Windows Azure AppFabric Windows Live Messenger Connect Bing Maps ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????:http://blog.csdn.net/sjb5201/archive/2011...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 29 (beta)New: Added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters, when using TSVN 1.4.x or older #7338 (thanks psifive) Fix: Add-in does not work, when "TortoiseSVN/bin" is not added to PATH environment variable #7357 Fix: Missing error message when ...Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: Sense/Net 6.0.3 Community Edition We are happy to introduce you the latest version of Sense/Net with integrated ECM Workflow capabilities! In the past weeks we have been working hard to migrate the product to .Net 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. 
This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features Added a new option that allows properties on data contract types to be marked as virtual. Bug Fixes Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...AllNewsManager.NET: AllNewsManager.NET 1.3: AllNewsManager.NET 1.3. This new version provide several new features, improvements and bug fixes. Some new features: Online Users. Avatars. Copy function (to create a new article from another one). SEO improvements (friendly urls). New admin buttons. And more...Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011)moved to Beta stage publish photo feature "email" field of User object added new Graph Api object: Group, Event new Graph Api connection: likes, groups, eventsDJME - The jQuery extensions for ASP.NET MVC: DJME2 -The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can goto http://www.dotnetage.com/djme.html What is new ?The Grid extension added The ModelBinder added which helping you create Bindable data Action. The DnaFor() control factory added that enabled Model bindable extensions. Enhance the ListBox , ComboBox data binding.New ProjectsAuto-mobile Forum: Automobileforum is a website regarding cars where users can share their ideas about cars and whatever information they want about a car they can get that from this website. The language used for development of this website is ASP.NET Azure Storage Samples: Azure Storage Samples are Windows Azure Storage code samples. They show how to perform every Windows Azure Storage operation for blobs, queues, and tables. There are 2 identical implementations, one using REST and the other using the .NET Storage Client Library (both in C#).b1234: b1234 is b1234betrayal: bahothBungee Jumper: This is a community website for a sport known as Bungee Jumping which is one of the most dangerous adventure sport. Chiave File Encryption: Application for file encryption and decryption using 512 Bit Rijndael encryption algorithm with simple UI to use. Its written in C# and compiled in .Net version 3.5, it incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass.DecEncIT: DecEncIT rend l’expérience de l’utilisateur commun avec la cryptographie (cryptage) plus aisée. Pas de soucies pour les réglages et les termes techniques, vagues et difficiles à comprendre ; inclus également une option pour automatiser toutes les tâches recommandées. VB.Net/3.5ELearningTutorial: A list of tutorialsExtjs Practice - Open our extj journey: ????extjs 1.demo ?? 2.partice ?? 3.work ??Functional: This class library for .NET 3.5 and 4.0 provides extension methods for delegates, to help with functional style programming.HauntedNetworking: HauntedNetworking is a site where people can share their paranormal experiences and related photographs. 
They can also share pictures of haunted and scary places across the world.KharaSoft Nexus: Nexus is the last inbox you'll ever need.Kojax: kojax projectLEAD - Learn Execute And Develop: It’s a portal to share your knowledge, learn from blogs, wikis, post your questions in query board, learn new things under video training, Execute and develop these, helping gaining knowledge.Luac For 5.1 ????: luac??????? Lunar Lander: Simple WPF Lunar Lander simulator, with reactive intelligent agent autopilot.MasterSchool: Tworzenie swiadectwmovies blogspot: The Movies blog- spot is an interactive site for the information related to movies, actors, personnel and fictional characters features. MvcTemplates: MvcTemplates is a project devoted to creating a complete suite of Razor Editor and Display templates that utilize attributes, built in models, and jQuery. It is developed in C# and packages the templates into an easy to use dll.MySearcher: Quickly and easily search the major search engines with MySearcher. No clicking around, no switching windows; No matter how many windows you have open, or what you're doing, Searching the major search engines is as simple as moving your mouse to the side of the screen!Photography in Kerala: Photography in Kerala's a website which makes easier for people who are into naturistic Photography to share their photos and other Info of Kerala(also called God's own country).You'll no longer have to search about Kerala,Or share your own info/pics here.Developed in ASP.Net.Pixels(Storage Engine): Pixels(Storage Engine)PortsManager: C# Project to Simplify working with Ports Currnetly it's working with COMs in Receive only and soon it'll be updated to work in send and expand the library to handle send informations even in network.Public Web Browser: Public Web Browser is a webbrowser client with alot of featuresRealStatus - MS Communicator 2007 & A Traffic Light Indicator Hardware: A demonstration of a traffic light indicator hardware corresponding to status events of MS Communicator in real-time. A demo video can be found here: http://www.youtube.com/watch?v=GwgfSVeXVksSABnzbdNet: A SABnzbd monitor and administrator. Built in C# WPF.Sendill: Kerfi fyrir sendibílastöðvarT4 Metadata and Data Annotations Template: This T4 template handles generating metadata classes from an Entity Framework 4 model and decorates primitive and navigation properties with data annotation attributes such as [Required] and [StringLength]. The [DataType] attribute is also applied when appropriate.urlshorten service: urlshorten service need storoom @ http://storoom.codeplex.com/ developed in vb.netuVersionClientCache: uVersionClientCache is a custom macro to always automatically version (URL querstring parameter) your files based on a MD5 hash of the file contents or file last modified date to prevent issues with client browsers caching an old file after you have changed it.Video Game Network Library: A lightweight network library written in C++ for video games.View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator make it easier for Microsoft Dynamics CRM 2011 customizers to copying the layout of a view and paste it to the layout of other views in the same entityVLC MakeIT: This is virtual learning center yogaasana4you: The website is on Yoga Asana for the community who want to feel the amazing experience of Yoga asana - Yoga Exercises and Postures.

    Read the article

  • The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000.

    - by lankylad
    I have a SQL Server 2008 Analysis Services Project. In the Data Source View I have a Named Query which references a single Data Source containing three tables. The Project processes successfully and the cube can be browsed. I recently added a second Data Source to the Data Source View and linked a table to the original Named Query. When I try to process the project, I get the message: OLE DB error: OLE DB or ODBC error: The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000. The Connection String for both Data Sources uses SQLNCLI10.1

    Read the article

  • MySQL error: Can't find symbol '_mysql_plugin_interface_version_' in library

    - by Josh Smith
    The boring, necessary details: I'm on Snow Leopard running MySQL locally. I'm trying to install the Sphinx engine for MySQL like so: mysql> install plugin sphinx soname 'sphinx.so'; ERROR 1127 (HY000): Can't find symbol '_mysql_plugin_interface_version_' in library I've Googled everywhere and can't seem to find an actual solution to this problem. For example this issue on the Sphinx forums seems unresolved. Someone else also raised this issue with similar results. The first post linked to an O'Reilly article which says: There is a common problem that might occur at this point: ERROR 1127 (HY000): Can't find symbol '_mysql_plugin_interface_version_' in library If you see a message like this, it is likely that you forgot to include the -DMYSQL_DYNAMIC_PLUGIN option when compiling the plugin. Adding this option to the g++ compile line is required to create a dynamically loadable plug-in. But the article ends on that point; I have no idea what this means or how to resolve the issue.

    Read the article

  • Where do I find scripts generated by SharePoint MCMS Migration Profiles

    - by HipCzeck
    I am attempting to migrate data from an Microsoft Content Management Server (MCMS) 2002 instance into a new Microsoft Office Sharepoint Server (MOSS) 2007 installation using the Manage Microsoft Content Management Server Migration Profiles tool in the Operations space of MOSS Central Administration. When analyzing the profile, I receive 4 warnings, all of which may be safely ignored, but when I actually execute the migration profile, I get the same warnings and an additional error with a description of: Line 6: Incorrect syntax near ';'. I have seen this error numerous times when mucking about in SQL Server and recognize it as a Transact SQL error message, but can't find the actual SQL statement that is being executed so that I may determine the source of the error. EDIT: After enabling verbose logging on the MCMS 2002 Migration category, and poring through the Unified Logging Service (ULS) logs, I received a more complete stack trace at the point of the error, and a couple more anomalies listed below. Anomalies: The following is an abbreviated listing from the ULS logs around the time of the pre-migration analysis. 01 MCMS 2002 Migration Verbose Start ConnectionCheck 02 MCMS 2002 Migration Verbose End ConnectionCheck 03 MCMS 2002 Migration Verbose Start DatabaseCheck 04 MCMS 2002 Migration High Extra table SiteDeployLock will not be migrated 05 MCMS 2002 Migration High Analysis: Extra index PK__SiteDeployLock__05D8E0BE 06 MCMS 2002 Migration Verbose End DatabaseCheck 07 MCMS 2002 Migration Medium Pre-migration analysis: RootCheckTask is skipped because database check is blocked. 08 MCMS 2002 Migration Medium Pre-migration analysis: RightsGroupNameCheckTask is skipped because database check is blocked. 09 MCMS 2002 Migration Medium Pre-migration analysis: InvalidNameCheckTask is skipped because database check is blocked. 10 MCMS 2002 Migration Medium Pre-migration analysis: LeafNameCheckTask is skipped because database check is blocked. 11 MCMS 2002 Migration Medium Pre-migration analysis: LeafLengthCheckTask is skipped because database check is blocked. 12 MCMS 2002 Migration Medium Pre-migration analysis: TemplateNameCheckTask is skipped because database check is blocked. 13 MCMS 2002 Migration Medium Pre-migration analysis: TemplateCollisionCheckTask is skipped because database check is blocked. 14 MCMS 2002 Migration Medium Pre-migration analysis: PlaceholderCheckTask is skipped because database check is blocked. 15 MCMS 2002 Migration Medium Pre-migration analysis: CheckedOutItemsCheckTask is skipped because database check is blocked. 16 MCMS 2002 Migration Medium Pre-migration analysis: SubmittedItemsCheckTask is skipped because database check is blocked. 17 MCMS 2002 Migration Medium Pre-migration analysis: DeletedItemsCheckTask is skipped because database check is blocked. 18 MCMS 2002 Migration Medium Pre-migration analysis: UserCheckTask is skipped because database check is blocked. 19 MCMS 2002 Migration Medium Pre-migration analysis: FileSizeCheckTask is skipped because database check is blocked. 20 MCMS 2002 Migration Medium Pre-migration analysis: HostHeaderMapCheckTask is skipped because database check is blocked. 21 MCMS 2002 Migration Verbose Start Server check 22 MCMS 2002 Migration Verbose End Server check 23 MCMS 2002 Migration Verbose Start Server emptyness check 24 MCMS 2002 Migration Verbose End Server emptyness check 25 MCMS 2002 Migration Medium PreMigrationAnalyzer: Dry run starts 26 MCMS 2002 Migration Verbose CleanLockProcedure: start. 
27 MCMS 2002 Migration High CleanLockProcedure: connection system lock is null 28 MCMS 2002 Migration Verbose Finished all tasks 29 MCMS 2002 Migration High PreMigrationAnalyzer ends with True 30 MCMS 2002 Migration Verbose Migration profile status is changed to AnalysisPassed Specifically, the two High level alerts on lines 4 and 5 are reflected in the migration report as warnings when running Pre-migration Analysis or running the migration profile. In addition, two other warnings appear in the migration report indicating two tables containing data (LayoutProperty and NodeLayout) that should be empty. According to the documentation, warnings are not sufficient cause to stop migration from occurring. Other anomalies are on lines 7-20 indicating a series of tests that are skipped because database check is blocked. The ULS doesn't give any additional warnings to indicate that the database check was blocked or exited in exceptional circumstances. After switching the profile from pre-migration analysis to exporting, there is one medium level warning that LastChangeTime is not set or incorrect. (null). As with all the skipped test names and SQL table names from the warnings, the major search engines are unable (with the exception of LayoutProperty) to find any reference to these objects or tests. Finally, the section of the log indicating the actual live migration attempt is appended below: 01 MCMS 2002 Migration Medium LastChangeTime is not set or incorrect. (null) 02 MCMS 2002 Migration Verbose Set export lock 03 MCMS 2002 Migration Verbose CleanLockProcedure: start. 04 MCMS 2002 Migration Verbose CleanLockProcedure: end. 05 MCMS 2002 Migration Verbose Prepare for export 06 MCMS 2002 Migration Verbose Open connection... 07 MCMS 2002 Migration Verbose Create temporary stored procedures 08 MCMS 2002 Migration Verbose Create temporary tables... 09 MCMS 2002 Migration Verbose Initialize temporary tables... 10 MCMS 2002 Migration Verbose InitializeTemporaryTables: start 11 MCMS 2002 Migration Verbose Initialize export table... 12 MCMS 2002 Migration Verbose InitializeExportTable: start 13 MCMS 2002 Migration Verbose CleanLockProcedure: start. 14 MCMS 2002 Migration Verbose CleanLockProcedure: end. 15 MCMS 2002 Migration High Migration throws exception: Line 6: Incorrect syntax near ';'.. Stacktrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SharePoint.Publishing.Internal.Administration... 
16 MCMS 2002 Migration High ....MigrationBatchCommand.ExecuteImmediate(String command) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands() at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection) at Microsoft.SharePoint.Publishing.Internal.Admin... 17 MCMS 2002 Migration High ...stration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal(). 18 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. Start. 19 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. End. 20 MCMS 2002 Migration Verbose Migration profile status is changed to Failed The stack trace of the failed parsing of the SQL command appear on lines 15-17. A cleaner version of the stack trace is appended below. Full Stack Trace: Migration throws exception: Line 6: Incorrect syntax near ';'.. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning( TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand .ExecuteImmediate(String command) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand .ExecuteWaitingCommands() at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer .SerializeSelectedExportObject(StringCollection objectAttribs) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess .InitializeExportTable(ScopeType scopeType) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess .InitializeTemporaryTables(DateTime lastChangeTime) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess .InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess .InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export (MigrationDataAccess dataAccess) at Microsoft.SharePoint.Publishing.Administration.ContentMigration .MigrateInternal(). None of this log information indicates the SQL command that is failing a parser check. 
I've checked the SQL servers hosting the source and destination databases for a trace of the query, but neither seems to have triggered the parse failure condition. That appears to have happened on the SharePoint server. Are there any other locations I should investigate that might tell me where to find the source of the error?
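One hedged way to capture the offending statement at the database itself, assuming the server hosting the migration databases is SQL Server 2008 or later (the session name capture_errors and the server name SQLSRV are placeholders; on SQL Server 2005 a Profiler trace of the Exception and SQL:BatchStarting events would be the rough equivalent): start an Extended Events session that records the text of any statement raising an error, re-run the migration profile, then read the ring buffer.
  rem create and start a session that records statements which raise errors
  sqlcmd -S SQLSRV -E -Q "CREATE EVENT SESSION capture_errors ON SERVER ADD EVENT sqlserver.error_reported (ACTION (sqlserver.sql_text, sqlserver.client_app_name) WHERE severity >= 11) ADD TARGET package0.ring_buffer;"
  sqlcmd -S SQLSRV -E -Q "ALTER EVENT SESSION capture_errors ON SERVER STATE = START;"
  rem re-run the migration profile, then dump whatever was captured
  sqlcmd -S SQLSRV -E -Q "SELECT CAST(t.target_data AS xml) FROM sys.dm_xe_session_targets t JOIN sys.dm_xe_sessions s ON s.address = t.event_session_address WHERE s.name = 'capture_errors';"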

    Read the article

  • Can the Dell PowerEdge T320 boot from a USB stick in hard drive emulation mode?

    - by Icydog
    I am trying to install FreeNAS on a Dell PowerEdge T320. I've followed the instructions here (http://doc.freenas.org/index.php/Burning_an_IMG_File) to write the .img file to four different USB sticks with dd if=FreeNAS-9.2.1.5-RELEASE-x64.img of=/dev/sde, but I can't boot off of any of them. When I try to boot, I get the following screen:
    F1 FreeBSD
    F2 FreeBSD
    F3 Drive 0
    Then it auto-selects F1 and either prints # about once a second forever, or it sits with a blinking cursor doing nothing. Forum posts and the wiki page linked above say to set the USB boot emulation mode to hard disk (as opposed to auto or CD/DVD/floppy), and older Dell PowerEdge models including the T310 have this option in the BIOS settings, but I cannot find it on the T320 with the latest BIOS 2.1.2. I have even tried writing the USB image to a USB hard disk, but with the same lack of success. Has anyone been able to find this USB boot setting on a T320, or been able to boot FreeNAS/FreeBSD from USB in some other way on a T320?
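    This doesn't answer the emulation-mode question, but a hedged way to rule out a bad write before blaming the BIOS is to rewrite the stick with an explicit block size and a cache flush, then read the first image-sized chunk back off the device and compare it to the image (device name /dev/sde as in the question; adjust for your system):
      # write the image and flush caches before removing the stick
      sudo dd if=FreeNAS-9.2.1.5-RELEASE-x64.img of=/dev/sde bs=64k conv=fsync
      # compare only the first <image size> bytes of the stick against the image
      size=$(stat -c %s FreeNAS-9.2.1.5-RELEASE-x64.img)
      sudo cmp -n "$size" FreeNAS-9.2.1.5-RELEASE-x64.img /dev/sde && echo "image written correctly"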

    Read the article

  • The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000

    - by lankylad
    I have a SQL Server 2008 Analysis Services Project. In the Data Source View I have a Named Query which references a single Data Source containing three tables. The Project processes successfully and the cube can be browsed. I recently added a second Data Source to the Data Source View and linked a table to the original Named Query. When I try to process the project, I get the message: OLE DB error: OLE DB or ODBC error: The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000. The Connection String for both Data Sources uses SQLNCLI10.1

    Read the article

  • Problems linking to social networks in Windows 8

    - by Andrew Cooper
    I've upgraded my laptop to Windows 8 (from Windows 7) and I'm having problems with getting information to show in the People and Messaging apps. I've linked my Facebook, Twitter and LinkedIn accounts to my Live Id, and on Windows 7 I was able to see my Friends' facebook activity in Windows Live Messenger. In the Windows 8 People app I can see all my contacts from Facebook, Twitter and LinkedIn, and I can see the on-line status of at least my Facebook contacts. I can also see the profiles details of each contact, but I don't get anything in the "What's New" view. The Messaging app is just blank. I assume I should be able to send messages to my contacts, but I can't see any way to do it. Am I missing something?

    Read the article

  • Coreinfo reports i7-3770 CPU doesn't support EPT (aka SLAT) and VMX

    - by Toro
    Here is the output:
    Coreinfo v3.05 - Dump information on system CPU and memory topology
    Copyright (C) 2008-2012 Mark Russinovich
    Sysinternals - www.sysinternals.com
    Note: Coreinfo must be executed on a system without a hypervisor running for accurate results.
    Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
    Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
    HYPERVISOR *  Hypervisor is present
    VMX -  Supports Intel hardware-assisted virtualization
    EPT -  Supports Intel extended page tables (SLAT)
    According to Intel the CPU does have VMX and EPT support.
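    Note the asterisk next to HYPERVISOR and the dashes next to VMX and EPT: a hypervisor is already running, so Coreinfo most likely cannot see the virtualization features, exactly as the note in its own banner warns. A hedged way to re-check, assuming Hyper-V is the hypervisor that owns the CPU (run from an elevated prompt, and undo afterwards):
      :: disable the hypervisor for the next boot, then restart
      bcdedit /set hypervisorlaunchtype off
      shutdown /r /t 0
      :: after the reboot, dump only the virtualization-related features
      Coreinfo.exe -v
      :: re-enable Hyper-V when finished
      bcdedit /set hypervisorlaunchtype auto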

    Read the article

  • Chrome: Saved username/password filled in incognito mode

    - by Wouter Coekaerts
    If I open an incognito window in Google Chrome and go to a webpage for which Chrome has a saved username and password (for example the login form on http://gmail.com), I see that my username and password are automatically filled in. Does that mean that I am not really incognito? Can the website see my username even if I don't explicitly log in? Or is there some mechanism behind the scenes that prevents the webpage from grabbing auto-filled values unless I actually log in? Clarification: Stored usernames (and passwords) are a lot like cookies: your unique identifier linked to a certain site, stored locally in your browser, available to the site when you open it. When you go incognito you ask your browser not to identify you to the sites you visit. It does that by (among other things) not exposing its cookies. Exposing the stored username in this mode does not make sense to me (but maybe I'm missing something...).

    Read the article

  • Networked Parallel Port in Linux / KVM / QEMU

    - by korkman
    What I want to use is the "-parallel" tcp or udp option from KVM / QEMU, but I can't find any server to pair with that client side. I don't serve a printer but a hardware dongle. I checked ser2net, which does provide "/dev/lp0" sharing, but it doesn't seem to work for KVM / QEMU. I suspect KVM / QEMU requires "/dev/parport0". I did rmmod lp, modprobe ppdev, and linked ser2net to parport0, but it didn't work out. Perhaps ser2net is not suited for this. I tried socat as well, and I tried netcat. No success. Does anyone know of any KVM / QEMU compatible parallel port server? Or did any of netcat, socat or ser2net work for you?
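    For what it's worth, an untested sketch of the pairing I would try at the plain byte-stream level (the port 4711 and the host name dongle-host are placeholders). Note that a dongle whose driver talks to the port through ppdev ioctls rather than simple reads and writes will not survive this kind of forwarding:
      # on the machine that physically has the dongle: expose /dev/parport0 over TCP
      sudo modprobe ppdev
      sudo socat TCP-LISTEN:4711,reuseaddr,fork GOPEN:/dev/parport0
      # on the KVM/QEMU host: point the guest's parallel port at that listener
      qemu-system-x86_64 ... -parallel tcp:dongle-host:4711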

    Read the article

  • Immutable hard links on ext3/4?

    - by shovas
    In my research on file versioning at the fs level, snapshotting, and related ideas, I took a look at hard links and exactly what they are and how they behave. Using rsync you can get a pretty slick poor man's snapshotting system up and running on file systems that don't natively support it. But can you get immutable hard links on ext3/4, or any other file systems for that matter? My definition of an immutable hard link is: a hard link which, when changed in one location, becomes a regular copy and is no longer a hard link. I would like this because it would let snapshots (in the rsync snapshotting technique) link against the source data itself instead of against a copy of the data. I have gigabytes of data that can't be duplicated due to space restrictions, but I have enough room if I can intelligently snapshot individual changed files with the rest linked to the source rather than copied. Given all that, is there some other technique, feature or technology I'm really looking for?
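    For reference, a minimal sketch of the rsync technique being referred to, with made-up paths; each new snapshot hard-links unchanged files against the previous snapshot, which is precisely where a break-on-write "immutable" hard link would let you link against the live source instead of keeping a full first copy:
      # first snapshot: a plain copy
      rsync -a /data/ /snapshots/2014-01-01/
      # later snapshots: unchanged files become hard links into the previous snapshot,
      # changed files are stored as fresh copies
      rsync -a --delete --link-dest=/snapshots/2014-01-01/ /data/ /snapshots/2014-01-02/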

    Read the article

  • Sorting in Pivot Table on how data is summarized, not just the value

    - by user26453
    Often I am creating pivot tables that summarize some count by some category. Let's say I am counting Yes/No responses by some category. I usually add the count field and display it as a "% of row", and then create a pivot chart. However, if I want to sort one of the columns, say "Yes", Excel sorts by the underlying count, not the calculated percentage. Any way around this?

    Read the article

  • Mono Project: How to install Mono framework on Red Hat Linux which is compiled on centOS ?

    - by funwithcoding
    We have Red Hat Enterprise Linux servers at our workplace. However, we don't have Red Hat Linux desktops. So I used CentOS 5.4 to compile the Mono sources and generated the Mono framework for CentOS, tested it with some sample code, and I am satisfied. I want to transfer this compiled framework to Red Hat Enterprise Linux 5. How can I do that? Do I have to compile the Mono framework statically, or do I have to copy the linked libraries as well? I am not very familiar with Linux. Any help is highly appreciated.
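    A hedged sketch of one way to move a from-source build between the two distributions, assuming Mono was configured with its own prefix (the path /opt/mono below is a placeholder); running ldd on the target machine shows whether any dynamically linked libraries are missing there and need to come from RHEL's own packages rather than being copied across:
      # on the CentOS 5.4 build box: build into a self-contained prefix and pack it up
      ./configure --prefix=/opt/mono && make && sudo make install
      tar czf mono-opt.tar.gz -C / opt/mono
      # on the RHEL 5 server: unpack, then look for unresolved shared libraries
      sudo tar xzf mono-opt.tar.gz -C /
      ldd /opt/mono/bin/mono | grep "not found"
      export PATH=/opt/mono/bin:$PATH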

    Read the article

  • Nginx Reverse proxy + SYN Flood

    - by Bradley
    We're running an nginx reverse proxy cluster, forwarding traffic to our main website; this lets us filter out unwanted traffic/users etc. and send them elsewhere. We now have a few issues with SYN floods, where the requests per second are overwhelming the proxy and the main server, causing both to become unavailable. Is there any iptables magic that can A) rate limit SYN packets / connections to HTTP, or B) block them altogether if the packet rate is malicious? Any advice on how to use limit_rate_zone in nginx would also help; I've googled and tried to apply a few results, but none of them work and the websites are still unavailable.
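    A hedged starting point rather than a tuned setup: enable SYN cookies and rate-limit inbound SYNs per source IP with the hashlimit match (every threshold below is a placeholder to adjust). On the nginx side, the directive being reached for is limit_req_zone / limit_req rather than limit_rate_zone, e.g. "limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;" in the http block and "limit_req zone=perip burst=20;" in the proxied location.
      # kernel-level protection against spoofed SYN floods
      sysctl -w net.ipv4.tcp_syncookies=1
      # allow up to 30 new connections per second per source IP to port 80, drop the rest
      iptables -N SYN_FLOOD
      iptables -A INPUT -p tcp --dport 80 --syn -j SYN_FLOOD
      iptables -A SYN_FLOOD -m hashlimit --hashlimit-name synflood --hashlimit-mode srcip --hashlimit-upto 30/sec --hashlimit-burst 60 -j RETURN
      iptables -A SYN_FLOOD -j DROP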

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files if it matters.
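    On the second question, a hypothetical version of the rewrite rule that HostGator-style Django/FastCGI setups typically end up with (the handler name mysite.fcgi and the paths are placeholders). The !-f condition only bypasses Django when Apache can see a regular file at the requested path, so one thing worth checking is that the symlinked static files actually resolve to targets Apache can read; if they don't, those requests fall through to Django and each one consumes a MySQL connection:
      # write a hypothetical ~/public_html/.htaccess for this kind of setup:
      # serve files that exist on disk directly, send everything else to Django
      printf '%s\n' 'RewriteEngine On' 'RewriteCond %{REQUEST_FILENAME} !-f' 'RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]' > ~/public_html/.htaccess
      # confirm the symlinked static files dereference to readable targets
      ls -lL ~/public_html/static/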

    Read the article

  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and started using it. Initially I thought it was working ok, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry on TCP). Specifically it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be the reverse of the problem that most people have when using bind (fails on the external IP, works on localhost). If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings, and plenty of output.
    $ dig @localhost google.com
    ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
    [snip lots of output]
    ;; Query time: 39 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Thu Oct 17 23:08:34 2013
    ;; MSG SIZE rcvd: 495
    If I do dig @localhost play.google.com (which has a larger response) then I get back something like:
    $ dig @localhost play.google.com
    ;; Truncated, retrying in TCP mode.
    ;; ERROR: ID mismatch: expected ID 3696, got 27130
    This seems to be standard, documented behaviour - when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works:
    $ dig @192.168.0.2 play.google.com
    ;; Truncated, retrying in TCP mode.
    ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
    [snip most of the output]
    ;; Query time: 5 msec
    ;; SERVER: 192.168.0.2#53(192.168.0.2)
    ;; WHEN: Thu Oct 17 23:05:55 2013
    ;; MSG SIZE rcvd: 521
    At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:
    options {
        directory "/var/cache/bind";
        allow-query { 192.168/16; 127.0.0.1; };
        forwarders { 8.8.8.8; 8.8.4.4; };
        dnssec-validation auto;
        edns-udp-size 4096;
        allow-transfer { any; };
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
    };
    And my /etc/resolv.conf is just:
    nameserver 127.0.0.1
    search .local
    The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard looking result. To be honest, if there were a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works. No truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarder (e.g. to OpenDNS), but that makes no difference.
    There's one last data point: if I repetitively do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:
    $ dig @localhost play.google.com
    ;; Truncated, retrying in TCP mode.
    ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;play.google.com. IN A
    ;; Query time: 4 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Thu Oct 17 23:20:13 2013
    ;; MSG SIZE rcvd: 33
    Any insights or things to try would be much appreciated.
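    Not part of the original question, but the symptom pattern (failures only via 127.0.0.1, answers carrying the wrong ID, intermittent REFUSED) is what you might expect if something other than bind is also answering on 127.0.0.1 port 53 - on Ubuntu 12.04 a dnsmasq instance spawned by NetworkManager or libvirt is a common second listener. A quick, hedged way to check which processes own port 53 on each address:
      # every TCP and UDP listener on port 53, with the owning process
      sudo netstat -lnptu | grep ':53 '
      # or, with ss
      sudo ss -lnp '( sport = :53 )'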

    Read the article

  • Multiple Windows Messenger Users Windows 8

    - by Kurt Woods
    I am trying out Windows 8 to see if I want to use it on my main computer. In Windows 8 I have my domain user account linked to my personal outook.com Live account. This is great for everything except Windows Live Messenger. I need to have my domain messenger account accessible. Is there a way to sign into multiple Messenger accounts on Windows 8, or at least sign into a different account without affecting every other app? Thanks! Kurt

    Read the article

< Previous Page | 176 177 178 179 180 181 182 183 184 185 186 187  | Next Page >