Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches around 120,000 to 150,000 rows, I see a dramatic performance cost on any query against it. For comparison, I have another table which grows at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even past 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements.

    Some specifics: both tables contain a DateTime field called SampleTime. A statement like the following (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I set the database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds.

    Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
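
    One thing worth trying (a sketch, not a confirmed fix): if the slow MAX query is scanning the clustered index, every row it touches drags in the wide XML columns. A narrow covering index on the two scalar columns lets SQL Server answer the query without ever reading the pages that hold the XML data; the table and column names below are taken from the question, the index name is hypothetical.

        -- Hypothetical covering index for the MAX(sampleTime) lookup:
        -- the optimizer can satisfy the query from this index alone.
        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);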

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0 to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works.

    Now when I replace the two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully, but the second one, PaymentProcess, always fails with:

        [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe the transactions are being handled successfully in ChargeInterest, because I can run the following without issues (or the dreaded "you started with 0 and now have 1 transaction"):

        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        --OR
        GO
        BEGIN TRAN
        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        ROLLBACK TRAN

    I have noticed that DTC gets kicked off in both instances; why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?
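
    A pattern that often resolves this symptom (a sketch, under the assumption that a ROLLBACK inside one of the procedures is dooming the outer DTC transaction): make each procedure nesting-aware, so it only opens and commits its own transaction when none exists, and never issues a full rollback from inside a caller's transaction.

        CREATE PROCEDURE proc_Example_NestingAware  -- hypothetical name
        AS
        BEGIN
            SET XACT_ABORT ON;
            DECLARE @OwnsTran bit;
            SET @OwnsTran = CASE WHEN @@TRANCOUNT = 0 THEN 1 ELSE 0 END;
            IF @OwnsTran = 1 BEGIN TRAN;
            BEGIN TRY
                -- ... real work here ...
                IF @OwnsTran = 1 COMMIT TRAN;
            END TRY
            BEGIN CATCH
                -- Only roll back if this proc started the transaction;
                -- otherwise re-raise and let the outermost owner decide.
                IF @OwnsTran = 1 AND XACT_STATE() <> 0 ROLLBACK TRAN;
                RAISERROR('proc_Example_NestingAware failed', 16, 1);
            END CATCH
        END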

  • Using Parameter Values In SQL Statement

    - by Dangerous
    I am trying to write a database script (SQL Server 2008) which will copy information from database tables on one server to corresponding tables in another database on a different server. I have read that the correct way to do this is to use a SQL statement in a format similar to the following:

        INSERT INTO <linked_server>.<database>.<owner>.<table_name>
        SELECT * FROM <linked_server>.<database>.<owner>.<table_name>

    As there will be several tables being copied, I would like to declare variables at the top of the script to allow the user to specify the names of each server and database that are to be used. These could then be used throughout the script. However, I am not sure how to use the variable values in the actual SQL statements. What I want to achieve is something like the following:

        DECLARE @SERVER_FROM AS NVARCHAR(50) = 'ServerFrom'
        DECLARE @DATABASE_FROM AS NVARCHAR(50) = 'DatabaseFrom'
        DECLARE @SERVER_TO AS NVARCHAR(50) = 'ServerTo'
        DECLARE @DATABASE_TO AS NVARCHAR(50) = 'DatabaseTo'

        INSERT INTO @SERVER_TO.@DATABASE_TO.dbo.TableName
        SELECT * FROM @SERVER_FROM.@DATABASE_FROM.dbo.TableName
        ...

    How should I use the @ variables in this code in order for it to work correctly? Additionally, do you think my method above is correct for what I am trying to achieve, and should I be using NVARCHAR(50) as my variable type or something else? Thanks
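
    For reference, T-SQL does not allow variables in the object-name position, so a script like this has to build each statement as a string and run it with dynamic SQL. A minimal sketch (table name hard-coded for illustration; QUOTENAME guards against name-breaking characters):

        DECLARE @sql NVARCHAR(MAX) =
              N'INSERT INTO ' + QUOTENAME(@SERVER_TO) + N'.' + QUOTENAME(@DATABASE_TO) + N'.dbo.TableName '
            + N'SELECT * FROM ' + QUOTENAME(@SERVER_FROM) + N'.' + QUOTENAME(@DATABASE_FROM) + N'.dbo.TableName';
        EXEC sp_executesql @sql;

    NVARCHAR is a reasonable choice for the variables; sysname (equivalent to NVARCHAR(128)) is what SQL Server itself uses for object names.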

  • Linq to SQL code generator features

    - by Anders Abel
    I'm very fond of LINQ to SQL and the programming model it encourages. I think that in many cases, when you are in control of both the database schema and the code, it is not worth the effort to maintain separate relational and object models for the data. Working with LINQ to SQL makes it simple to have type-safe data access from .NET, using the partial extension methods to implement business rules.

    Unfortunately, I do not like the dbml designer due to its lack of a schema refresh function. So far I have used SqlMetal, but that lacks the customization options of the dbml designer. Because of that, I've started working on a tool which regenerates the whole code file like SqlMetal, but has the ability to apply the customizations that are available in the dbml designer (and maybe more in the future). The customizations will be described in an XML file which only contains those parts that shouldn't have default values. This should keep the XML file size down, as well as the maintenance burden of it.

    To help me focus on the right features, I would like to know: what would be your favourite feature in a LINQ to SQL code generator?

  • Choose an XML node in SQL Server based on max value of a child element

    - by Jay
    I am trying to select from the SQL Server 2005 XML datatype some values based on the max date that is located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server:

        <user>
          <name>Joe</name>
          <token>
            <id>ABC123</id>
            <endDate>2013-06-16 18:48:50.111</endDate>
          </token>
          <token>
            <id>XYX456</id>
            <endDate>2014-01-01 18:48:50.111</endDate>
          </token>
        </user>

    I want to perform a select from this XML column where it determines the max date within the token elements and returns a row similar to the result below for each record:

        Joe  XYX456  2014-01-01 18:48:50.111

    I have tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that would work. I also tried to use the SQL MAX function, but I wasn't able to get it working with that method either. If I only have a single token it of course works fine, but when I have more than one I get a NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that. Here is an example of the one that works when I only have a single token:

        SELECT
            XMLCOL.query('user/name').value('.', 'NVARCHAR(20)') AS name,
            XMLCOL.query('user/token/id').value('.', 'NVARCHAR(20)') AS id,
            XMLCOL.query('user/token/endDate').value('xs:dateTime(.)', 'DATETIME') AS endDate
        FROM MYTABLE
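
    One approach that sidesteps the XPath max() question entirely (a sketch; it assumes the table has a key column, called pk here, to partition by): shred every token with nodes(), then keep the latest one per row with ROW_NUMBER().

        WITH shredded AS (
            SELECT
                XMLCOL.value('(/user/name)[1]', 'NVARCHAR(20)') AS name,
                t.c.value('(id/text())[1]', 'NVARCHAR(20)') AS id,
                t.c.value('(endDate/text())[1]', 'DATETIME') AS endDate,
                ROW_NUMBER() OVER (
                    PARTITION BY MYTABLE.pk
                    ORDER BY t.c.value('(endDate/text())[1]', 'DATETIME') DESC
                ) AS rn
            FROM MYTABLE
            CROSS APPLY XMLCOL.nodes('/user/token') AS t(c)   -- one row per token
        )
        SELECT name, id, endDate
        FROM shredded
        WHERE rn = 1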

  • SQL syntax problem (multiple selects)

    - by user279521
    I am having problems retrieving accurate data values with my stored proc query below:

        CREATE PROCEDURE usp_InvoiceErrorLog
            @RecID int
        AS
        DECLARE @ErrorString as varchar(1000), @ErrorCode as int;

        SELECT @ErrorCode = ErrorCode
        FROM tbl_AcctRecv_WebRpt
        WHERE RecID = @RecID;

        IF NOT (@ErrorCode = NULL)
        BEGIN
            SELECT @ErrorString = ErrorDesc
            FROM tbl_ErrDesc
            WHERE ErrorCode = @ErrorCode
        END

        SELECT RecID, VendorNum, VendorName, InvNum, InvTotal,
               (SELECT CONVERT(VARCHAR(11), InvDate, 106) AS [DD MON YYYY]) AS InvDate,
               TicketRequestor, ErrorCode, @ErrorString AS ErrorDesc
        FROM tbl_AcctRecv_WebRpt
        WHERE RecID = @RecID

    The ErrorDesc column (in the final select statement at the bottom) returns a NULL value, when it should return valid string data. Any ideas?
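
    A likely culprit (an observation, not a confirmed diagnosis): in T-SQL, the comparison @ErrorCode = NULL never evaluates to TRUE under the default ANSI_NULLS setting, even when @ErrorCode really is NULL, so the IF branch never runs and @ErrorString keeps its initial NULL. NULL tests need IS NULL / IS NOT NULL:

        IF @ErrorCode IS NOT NULL
        BEGIN
            SELECT @ErrorString = ErrorDesc
            FROM tbl_ErrDesc
            WHERE ErrorCode = @ErrorCode
        END

    Note also that if no tbl_ErrDesc row matches the code, @ErrorString would still come back NULL.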

  • SQL string manipulation to return multiple rows

    - by Andy Jacobs
    I'm an experienced programmer, but relatively new to SQL. We're using Oracle 10 and 11. I have a system in place using SQL that combines actual rows with virtual rows (e.g. "SELECT 1 FROM DUAL"), doing unions and intersects as needed, which all seems to work.

    My problem is that I need to combine this system, which is expecting rows of data, with new data that will have the data in (let's say, for simplification) comma-delimited strings. So I think what I need is a way to convert a string like "5,6,7,8" into 4 rows with one column each, with "5" in the first row, "6" in the second, etc. In other languages, I'd do a Split with comma as the delimiter. Of course, the data won't always have 4 entries.

    There's a second question, but I'll ask it separately. I suspect it will simplify things, if possible, if the solution to the above could be used as a table in another SQL statement (i.e., to work with my existing system). Thanks for any help.
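
    A common Oracle idiom for this (a sketch; REGEXP_SUBSTR is available from 10g) is to generate rows with CONNECT BY LEVEL and peel off one delimited piece per row. Because it is just a query, it can be wrapped as an inline view or subquery inside a larger statement:

        -- each LEVEL pulls the next comma-separated token; stops when none remain
        SELECT REGEXP_SUBSTR('5,6,7,8', '[^,]+', 1, LEVEL) AS val
        FROM dual
        CONNECT BY REGEXP_SUBSTR('5,6,7,8', '[^,]+', 1, LEVEL) IS NOT NULL;

    Replacing the literal with a bound string or column expression turns this into a reusable "split" row source.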

  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following:

        SELECT <field list>
        FROM <table list>
        WHERE <join conditions> AND <condition list>
          AND PrimaryKey IN
              (SELECT PrimaryKey FROM <table list> WHERE <join list> AND <condition list>)
          AND PrimaryKey NOT IN
              (SELECT PrimaryKey FROM <table list> WHERE <join list> AND <condition list>)

    The sub-select queries both have multiple sub-select queries of their own that I'm not showing, so as not to clutter the statement.

    One of the developers on my team thinks a view would be better. I disagree, in that the SQL statement uses variables passed in by the program (based on the user's login Id). Are there any hard and fast rules on when a view should be used vs. a SQL statement? What performance difference is there between running SQL statements against regular tables vs. against views? (Note that all the joins / where conditions are against indexed columns, so that shouldn't be an issue.)

    EDIT for clarification... Here's the query I'm working with:

        select obj_id
        from object
        where obj_id in (
            (select distinct(sec_id)
             from security
             where sec_type_id = 494
               and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                  or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                      and sec_usergroup_type_id = 231) )
               and sec_obj_id in (
                   select obj_id
                   from object
                   where obj_ot_id in (
                       select of_ot_id
                       from obj_form
                       left outer join obj_type on ot_id = of_ot_id
                       where ot_app_id = 87
                         and of_id in (
                             select sec_obj_id
                             from security
                             where sec_type_id = 493
                               and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                                  or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                      and sec_usergroup_type_id = 231) ) )
                         and of_usage_type_id = 131 ) ) ) )
           or (obj_ot_id in (
                   select of_ot_id
                   from obj_form
                   left outer join obj_type on ot_id = of_ot_id
                   where ot_app_id = 87
                     and of_id in (
                         select sec_obj_id
                         from security
                         where sec_type_id = 493
                           and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                              or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                  and sec_usergroup_type_id = 231) ) )
                     and of_usage_type_id = 131 )
               and obj_id not in (select sec_obj_id from security where sec_type_id = 494) )
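
    One point worth adding to the view-vs-statement trade-off: a plain view cannot accept parameters, but an inline table-valued function can, and SQL Server expands it into the calling query much as it expands a view. A minimal sketch (hypothetical names) of how the user-dependent part could be factored out:

        CREATE FUNCTION dbo.fn_VisibleObjects (@UserId int)
        RETURNS TABLE
        AS
        RETURN
            SELECT sec_obj_id
            FROM security
            WHERE sec_usergroup_id = @UserId
              AND sec_usergroup_type_id = 230;
        GO

        -- used like a parameterized view:
        SELECT o.obj_id
        FROM object AS o
        WHERE o.obj_id IN (SELECT sec_obj_id FROM dbo.fn_VisibleObjects(3278));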

  • Which SQL statements to execute with intersection / junction tables

    - by user1455103
    Here is a simplified database layout:

    - One condo can hold multiple properties (flats, garage boxes, etc.) - a 1->n relationship
    - One owner can have multiple properties in the same condo, and properties can have more than one owner (m->n, changed to two 1->n relationships with the junction table)
    - One condo can have multiple owners - 1->n

    Some additional clarification: an owner is a member of a condo. A condo is made of properties belonging to owners, BUT an owner is not linked to a property directly (there can be no relation between a property and an owner for a certain time, BUT there will ALWAYS be a relation between an owner and a condo). Reason for this: the agent managing the condo will first create a list of owners and a list of properties. It is only later that he will "link" each property to one or multiple owners (or inversely).

    I'm quite new to SQL. What SQL statements should I execute to:

    - SELECT, for a specific condo (WHERE condition), the properties and their respective owners (all properties should be listed even if owners are null) - see the sketch after this list
    - SELECT, for a specific condo (WHERE condition), the owners along with their properties (all owners should be listed even if properties are null)
    - UPDATE / DELETE existing owners (I'm uncertain about how to handle the operation for the junction tables. Should I first check if there is an entry in the junction table or not?)
    - UPDATE / DELETE existing properties (same concern)
    - INSERT new owners (should I use two different SQL statements depending on whether the owner should be linked to a property or not - an IF condition?)
    - INSERT new properties (same question as above)

    Could you be as clear and generic as possible so that it can be reused? :-)
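
    A generic sketch of the first SELECT (table and column names are assumptions, since the layout diagram is not reproduced here): LEFT JOINs through the junction table keep properties that have no owner yet.

        SELECT p.property_id, p.description, o.owner_id, o.owner_name
        FROM property AS p
        LEFT JOIN property_owner AS po ON po.property_id = p.property_id
        LEFT JOIN owner AS o ON o.owner_id = po.owner_id
        WHERE p.condo_id = 42;

    The second SELECT is the mirror image, starting FROM owner and LEFT JOINing toward property. For deleting an owner, the usual pattern is to delete the junction rows first (DELETE FROM property_owner WHERE owner_id = ...), then the owner row; no existence check is needed, since deleting zero junction rows is harmless.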

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures.

    However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger body (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). I.e., the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or in any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
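
    One commonly suggested alternative (a sketch, and not airtight: anything with permission to set it can spoof the marker) is CONTEXT_INFO, which travels with the session regardless of nesting depth or dynamic SQL:

        -- At the top of every stored procedure (0x01 is an arbitrary marker):
        SET CONTEXT_INFO 0x01;

        -- Inside the trigger: no marker present means direct DML, so audit here.
        IF ISNULL(SUBSTRING(CONTEXT_INFO(), 1, 1), 0x00) <> 0x01
        BEGIN
            -- ... insert into audit table ...
        END

    The procedures would also need to clear the marker (SET CONTEXT_INFO 0x00) before returning; otherwise later ad hoc DML on the same pooled connection would be misclassified.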

  • Invalid SQL Query

    - by svovaf
    I have the following query, which in my opinion is valid, but I keep getting an error telling me that there is a problem with "WHERE em.p4 = ue.p3" - Unknown column 'ue.p3' in 'where clause'. This is the query:

        SELECT DISTINCT ue.p3
        FROM table1 AS ue
        INNER JOIN table2 AS e ON ue.p3 = e.p3
        WHERE EXISTS (
            SELECT 1
            FROM (
                SELECT (COUNT(*) >= 1) AS MinMutual
                FROM table4 AS smm
                WHERE smm.p1 IN (
                    SELECT sem.p3
                    FROM table3 AS sem
                    INNER JOIN table2 AS em ON sem.p3 = em.p3
                    WHERE em.p4 = ue.p3
                      AND sem.type = 'friends'
                      AND em.p2 = 'normal'
                )
                AND smm.p5 IN (15000, 15151)
            ) AS Mutual
            WHERE Mutual.MinMutual = TRUE
        )
        LIMIT 11

    If I execute the sub-query which is inside the EXISTS function on its own, everything is O.K. PLEASE HELP!
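
    For what it's worth, MySQL does not let a derived table (the inner "FROM (SELECT ...) AS Mutual") reference columns of the outer query, which is exactly where ue.p3 becomes unknown. A sketch of an equivalent query that drops the derived-table layer so the correlation becomes legal:

        SELECT DISTINCT ue.p3
        FROM table1 AS ue
        INNER JOIN table2 AS e ON ue.p3 = e.p3
        WHERE (
            SELECT COUNT(*)
            FROM table4 AS smm
            WHERE smm.p1 IN (
                SELECT sem.p3
                FROM table3 AS sem
                INNER JOIN table2 AS em ON sem.p3 = em.p3
                WHERE em.p4 = ue.p3
                  AND sem.type = 'friends'
                  AND em.p2 = 'normal'
            )
            AND smm.p5 IN (15000, 15151)
        ) >= 1
        LIMIT 11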

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largeish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1.

    The server log ships to a standby, so regular log backups are taken at 15 minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes.

    This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was.

    Can anyone suggest a cause or diagnostic process for this situation?
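
    As a starting point for the next occurrence, a quick look at what the backup request is actually waiting on (a generic diagnostic sketch, not specific to this incident) can distinguish an I/O stall from blocking:

        SELECT r.session_id, r.command, r.status, r.wait_type,
               r.wait_time, r.blocking_session_id, r.percent_complete
        FROM sys.dm_exec_requests AS r
        WHERE r.command LIKE 'BACKUP%';

    A non-zero blocking_session_id points at another process holding the file or resource; a pure BACKUPIO wait with percent_complete frozen points more toward the shared RAID-1 volume or a third-party filter driver (antivirus or backup agent) holding the destination file.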

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server & cursors. I have a table named Order:

        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11

    I have a second table named Storage:

        Item  Amount
        A     40
        B     44
        C     20
        D     1

    For every OrderID, I want to check if enough items are available. If not, I want to return an error message. How can this be done with cursors at all? Are nested cursors the solution to this? My main issue is to understand how I can fetch the OrderID as an actual "group" of ID = 1, 2, 3, etc. instead of line by line.
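
    Before reaching for cursors, note that the per-order check can be expressed set-based (a sketch that checks each order line against current stock independently, ignoring that several orders might compete for the same stock):

        -- Orders containing at least one line that exceeds what Storage holds
        SELECT o.OrderID
        FROM [Order] AS o
        JOIN Storage AS s ON s.Item = o.Item
        GROUP BY o.OrderID
        HAVING MIN(CASE WHEN s.Amount >= o.Amount THEN 1 ELSE 0 END) = 0;

    With the sample data this returns OrderID 2 (it needs 21 of item D, but only 1 is in storage). If a cursor is required for the error reporting, it can iterate over this result set, one row per failing order, instead of nesting cursors over individual lines.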

  • HD Crash SQL server -> DBCC - consistency errors in table 'sysindexes'

    - by Julian de Wit
    Hello. A client of mine has had an HD crash and a SQL DB got corrupt. They did not make backups, so they have a big problem. When I tried (as an ultimate measure) to run a DBCC repair, I got the following message. Can anybody help me with this?

        Server: Msg 8966, Level 16, State 1, Line 1
        Could not read and latch page (1:872) with latch type SH. sysindexes failed.
        Server: Msg 8944, Level 16, State 1, Line 1
        Table error: Object ID 2, index ID 0, page (1:872), row 11. Test (columnOffsets->IsComplex (varColumnNumber) && (ColumnId == COLID_HYDRA_TEXTPTR || ColumnId == COLID_INROW_ROOT || ColumnId == COLID_BACKPTR)) failed. Values are 2 and 5.
        The repair level on the DBCC statement caused this repair to be bypassed.
        CHECKTABLE found 0 allocation errors and 1 consistency errors in table 'sysindexes' (object ID 2).
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

  • psqlODBC won't load after installing MS SQL ODBC driver on RHEL 6

    - by Kapil Vyas
    I had the PostgreSQL drivers working on my RHEL 6. But after I installed the Microsoft® SQL Server® ODBC Driver 1.0 for Linux, I can no longer connect to PostgreSQL data sources. I can connect to SQL Server data sources fine.

    When I had this same issue a week ago, I uninstalled the MS SQL Server ODBC driver from Linux and that fixed the issue. I had to copy the psqlodbcw.so files from another machine to replenish them. I don't want to do the same this time; I want both drivers to work on Linux. This time around the setup file /usr/lib64/libodbcpsqlS.so got deleted, but replenishing it did not fix the issue. I keep getting the following error in spite of the file being present with rwx permissions:

        [root@localhost lib64]# isql -v STUDENT dsname pwd12345
        [01000][unixODBC][Driver Manager]Can't open lib '/usr/lib64/psqlodbc.so' : file not found
        [ISQL]ERROR: Could not SQLConnect
        [root@localhost lib64]#

    Here is a printout of the file permissions:

        [root@localhost lib64]# ls -al p*.so
        lrwxrwxrwx. 1 root root     12 Dec  7 09:15 psqlodbc.so -> psqlodbcw.so
        -rwxr-xr-x. 1 root root 519496 Dec  7 09:35 psqlodbcw.so

    And my odbcinst.ini file looks as follows:

        [PostgreSQL]
        Description=ODBC for PostgreSQL
        Driver=/usr/lib/psqlodbc.so
        Driver64=/usr/lib64/psqlodbc.so
        Setup=/usr/lib/libodbcpsqlS.so
        Setup64=/usr/lib64/libodbcpsqlS.so
        FileUsage=1
        UsageCount=4

    I also referred to this link: http://mailman.unixodbc.org/pipermail/unixodbc-support/2010-September.txt
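
    One detail worth checking (a diagnostic sketch): unixODBC's "file not found" message can appear not only when the driver file itself is missing, but also when a shared library the driver depends on cannot be loaded, and the MS driver install may have replaced or relocated such a library.

        # list unresolved dependencies of the PostgreSQL driver (path from the post)
        ldd /usr/lib64/psqlodbcw.so | grep "not found"

    Any line printed here names the actual missing library; reinstalling the package that provides it (or fixing the linker path) is often enough to let both drivers coexist.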

  • figuring out which field to look for a value in with SQL and perl

    - by Micah
    I'm not too good with SQL, and I know there's probably a much more efficient way to accomplish what I'm doing here, so any help would be much appreciated. Thanks in advance for your input!

    I'm writing a short program for the local high school. At this school, juniors and seniors who have driver's licenses and cars can opt to drive to school rather than ride the bus. Each driver is assigned exactly one space, and their DLN is used as the primary key of the drivers table. Makes, models, and colors of cars are stored in a separate cars table, related to the drivers table by the license plate number field.

    My idea is to have a single search box on the main GUI of the program where the school secretary can type in who/what she's looking for and pull up a list of results. Thing is, she could be typing a license plate number, a car color, make, or model, some driver's name, some student driver's DLN, or a space number. As the programmer, I don't know what exactly she's looking for, so a couple of options come to mind to be certain I check everywhere for a match:

    1) Perform a couple of "SELECT * FROM [tablename]" SQL statements, one per table, cram the results into arrays in my program, then search across the arrays one element at a time with regex, looking for a pattern similar to the search term; if I find one, add the entire record that had a match to a results array to display on screen at the end of the search.

    2) Take whatever she's looking for into the program as a scalar and prepare multiple SELECT statements around it, such as:

        SELECT * FROM DRIVERS WHERE DLN = $Search_Variable
        SELECT * FROM DRIVERS WHERE First_Name = $Search_Variable
        SELECT * FROM CARS WHERE LICENSE = $Search_Variable

    and so on for each attribute of each table, sticking the results into a results array to show on screen when the search is done.

    Is there a cleaner way to go about this lookup without having to make her specify exactly what she's looking for? Possibly some kind of SQL statement I've never seen before?
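
    A third option (a sketch with assumed table and column names, shown as parameterized Perl DBI since the post mentions Perl): a single query that joins the two tables once and ORs the candidate columns together, letting the database do the searching.

        my $term = '%' . $search . '%';
        my $sth = $dbh->prepare(q{
            SELECT d.*, c.*
            FROM drivers d
            LEFT JOIN cars c ON c.license = d.license
            WHERE d.dln = ? OR d.first_name LIKE ? OR d.last_name LIKE ?
               OR d.space_number = ? OR c.license = ?
               OR c.make LIKE ? OR c.model LIKE ? OR c.color LIKE ?
        });
        $sth->execute($search, $term, $term, $search, $search, $term, $term, $term);

    One round trip, no regex pass over full-table dumps, and placeholders keep the secretary's input from being interpreted as SQL.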

  • Getting a Specified Cast is not valid while importing data from Excel using Linq to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic it is to use LINQ to SQL, I wanted to try to import data from an Excel sheet into my SQL database.

    First, my Excel sheet: it contains 4 columns, namely ItemNo, ItemSize, ItemPrice, UnitsSold. I have created a database table named ProductsSold with the following fields:

        Id        int not null identity  -- with auto increment set to true
        ItemNo    VarChar(10) not null
        ItemSize  VarChar(4) not null
        ItemPrice Decimal(18,2) not null
        UnitsSold int not null

    Now I created a dal.dbml file based on my database, and I am trying to import the data from the Excel sheet into the DB table using the code below. Everything happens on the click of a button.

        private const string forecast_query = "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]";

        protected void btnUpload_Click(object sender, EventArgs e)
        {
            var importer = new LinqSqlModelImporter();
            if (fileUpload.HasFile)
            {
                var uploadFile = new UploadFile(fileUpload.FileName);
                try
                {
                    fileUpload.SaveAs(uploadFile.SavePath);
                    if (File.Exists(uploadFile.SavePath))
                    {
                        importer.SourceConnectionString = uploadFile.GetOleDbConnectionString();
                        importer.Import(forecast_query);
                        gvDisplay.DataBind();
                        pnDisplay.Visible = true;
                    }
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Source.ToString());
                    lblInfo.Text = ex.Message;
                }
                finally
                {
                    uploadFile.DeleteFileNoException();
                }
            }
        }

    And here is the code for LinqSqlModelImporter:

        public class LinqSqlModelImporter : SqlImporter
        {
            public override void Import(string query)
            {
                // importing data using an OleDb command and inserting into the DB using LINQ to SQL
                using (var context = new WSDALDataContext())
                {
                    using (var myConnection = new OleDbConnection(base.SourceConnectionString))
                    using (var myCommand = new OleDbCommand(query, myConnection))
                    {
                        myConnection.Open();
                        var myReader = myCommand.ExecuteReader();
                        while (myReader.Read())
                        {
                            context.ProductsSolds.InsertOnSubmit(new ProductsSold()
                            {
                                ItemNo = myReader.GetString(0),
                                ItemSize = myReader.GetString(1),
                                ItemPrice = myReader.GetDecimal(2),
                                UnitsSold = myReader.GetInt32(3)
                            });
                        }
                    }
                    context.SubmitChanges();
                }
            }
        }

    Can someone please tell me where I am making the error, or if I am missing something? This is driving me nuts. When I debugged, I got this error: "when casting from a number, the value must be a number less than infinity". I really appreciate it.
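
    A plausible cause worth ruling out (a sketch, since the exact cell types depend on the workbook): the Jet/ACE OLE DB provider tends to surface Excel numeric cells as double, so the strongly typed GetDecimal/GetInt32 accessors can throw cast errors even when the values look fine in Excel. Converting from the boxed value is more forgiving:

        // read as object, then convert - tolerates double/decimal/int differences
        ItemPrice = Convert.ToDecimal(myReader.GetValue(2)),
        UnitsSold = Convert.ToInt32(myReader.GetValue(3))

    If any cells can be empty, a DBNull check before the conversion is also needed.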

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night and updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time, but could not track down the reason. We finally found that the nightly snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0: we were running the snapshot agent before, and the conflicts would accumulate.

    Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround?

    Here is the line from the script that adds the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub'
          , @frequency_type = 4
          , @frequency_interval = 1
          , @frequency_relative_interval = 1
          , @frequency_recurrence_factor = 0
          , @frequency_subday = 1
          , @frequency_subday_interval = 5
          , @active_start_date = 0
          , @active_end_date = 0
          , @active_start_time_of_day = 500
          , @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent utility. There is no mention in the documentation for that utility of dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

  • SQL query to print mirror labels

    - by Eric
    I want to print labels in Word, as returned by a SQL query, in a layout such as:

        1 2 3
        4 5 6

    When I want to print the reverse of those labels, I have to print them as follows:

        3 2 1
        6 5 4

    In my real case I have 5 columns by 2 rows. How can I formulate my query so that my records are ordered like the second layout? The normal ordering is handled by Word, so my query is like:

        SELECT * FROM Products ORDER BY Products.id

    I'm using MS Access =(

    EDIT: Just to make it clear, I'd like my records to be ordered such as:

        3 2 1
        6 5 4
        9 8 7
        12 11 10

    EDIT2: my table looks like this:

        ID  ProductName
        1   Product1
        2   Product2
        3   Product3
        n   Product[n]

    I want the IDs to be returned as I mentioned above.
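
    A sketch of one way to get that mirror order in Access SQL (assuming IDs are contiguous and start at 1, with 3 labels per physical row; swap in 5 for the real sheet): sort by which row the label falls in, then by the position within that row, descending.

        SELECT p.ID, p.ProductName
        FROM Products AS p
        ORDER BY Int((p.ID - 1) / 3), ((p.ID - 1) Mod 3) DESC;

    Int((ID-1)/3) groups IDs 1-3, 4-6, 7-9... into rows; the Mod term reverses each group, yielding 3,2,1, then 6,5,4, then 9,8,7, and so on.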

  • Doctrine SQL Server uniqueidentifier isn't cast as char or nvarchar when retrieved from the database

    - by Tres
    When I retrieve a record from the database which has a column of type "uniqueidentifier", Doctrine fills it with null rather than the unique id from the database. Some research and testing has narrowed this down to a PDO/dblib driver issue: when querying directly via PDO, null is returned in place of the unique id.

    For reference, http://trac.doctrine-project.org/ticket/1096 has a bit on this; however, it was updated 11 months ago with no comment for resolution. A way around this, as mentioned at http://bugs.php.net/bug.php?id=24752&edit=1, is to cast the column as a char. However, it doesn't seem Doctrine exposes the native field type outside of generating models, which makes it a bit hard to detect uniqueidentifier types and cast them internally when building the SQL query.

    Has anyone found a workaround for this?
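
    For anyone hitting the same wall, the cast workaround looks like this on the SQL side (a sketch with a hypothetical table and column; a uniqueidentifier renders as 36 characters):

        SELECT CAST(row_guid AS CHAR(36)) AS row_guid
        FROM some_table;

    Since Doctrine hides the native type, one pragmatic route is to issue a raw/native query for just the tables that carry uniqueidentifier columns, applying the CAST there and letting Doctrine hydrate everything else normally.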

  • How to make a Stored Procedure that takes in XML and uses that xml as an Update + call this stored p

    - by chobo2
    Hi. I am using MS SQL Server 2005 and I want to do a mass update. I am thinking that I might be able to do it by sending an XML document to a stored procedure. I have seen many examples of how to do it for an insert:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST] (@UpdatedProdData XML)
        AS
        INSERT INTO dbo.UserTable (CreateDate)
        SELECT @UpdatedProdData.value('(/ArrayOfUserTable/UserTable/CreateDate)[1]', 'DATETIME')

    But I am not sure what it would look like for an update. I am also unsure how to pass in the XML through ADO.NET. Do I pass it as a string through a parameter, or what?

    I know SqlDataAdapter has a batch update method, but I am using LINQ to SQL, so I would rather keep using it. If this works, I would be able to grab all records with LINQ to SQL and have them as objects, then manipulate the objects and use XML serialization. Finally, I could just use plain ADO.NET to send the XML to the server. This might be slower than the SqlDataAdapter, but I am willing to take that hit if I can keep using objects.
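
    A sketch of the update-side equivalent (it assumes the serialized rows carry an Id element to match on; the element names mirror the insert example, the procedure name is hypothetical):

        CREATE PROCEDURE [dbo].[spTEST_UpdateXML] (@UpdatedProdData XML)
        AS
        UPDATE u
        SET CreateDate = x.n.value('(CreateDate/text())[1]', 'DATETIME')
        FROM dbo.UserTable AS u
        JOIN @UpdatedProdData.nodes('/ArrayOfUserTable/UserTable') AS x(n)
            ON u.Id = x.n.value('(Id/text())[1]', 'INT');

    On the ADO.NET side, the XML can indeed travel as a parameter: add it with SqlDbType.Xml (a plain string value is accepted), and SQL Server converts it to the XML type on arrival.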

  • LINQ to SQL - Insert yielding strange behavior.

    - by Isaac
    Hi, I'm trying to insert several newly created items into the database. I have a LINQ to SQL generated class called "Order". Inside Order, there's a property called "OrderItems", which is also generated by LINQ to SQL and represents the items of that order. So far so good. The problem I'm having right now is when I try to add more than one newly created OrderItem to an Order. I.e.:

        Order o = orderWorker.GetById( 10 );
        for( int i = 0; i < 5; ++i )
        {
            OrderItem oi = new OrderItem
            {
                Order = o,
                Price = 100,
                ShippingPrice = 100,
                ShippingMethod = ...
            };
            o.OrderItems.Add( oi );
        }
        context.SubmitChanges();

    Unfortunately, only a single entity is being added. Yes, I checked the generated SQL by adding Context.Log = Console.Out, and yes, only one statement was created. Any clues?

    By the way, I know I'm not using InsertOnSubmit, but the documentation says:

        You can explicitly request Inserts by using InsertOnSubmit. Alternatively, LINQ to SQL can infer Inserts by finding objects connected to one of the known objects that must be updated. For example, if you add an Untracked object to an EntitySet(TEntity) or set an EntityRef(TEntity) to an Untracked object, you make the Untracked object reachable by way of tracked objects in the graph. While processing SubmitChanges, LINQ to SQL traverses the tracked objects and discovers any reachable persistent objects that are not tracked. Such objects are candidates for insertion into the database.

    Thank you very much for your time.
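
    One experiment that may narrow this down (a sketch, assuming the generated context exposes an OrderItems table): queue the new rows explicitly instead of relying on graph inference, which takes the EntitySet bookkeeping out of the picture.

        var items = new List<OrderItem>();
        for (int i = 0; i < 5; ++i)
        {
            items.Add(new OrderItem { Order = o, Price = 100, ShippingPrice = 100 });
        }
        context.OrderItems.InsertAllOnSubmit(items);
        context.SubmitChanges();

    If five INSERTs show up in the log this way, the original problem likely sits in how the loop's objects were attached; if only one still appears, the context itself is suspect.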

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants the data in the MySQL DBs used for our website to be "linked and synced" with a financial server that has its DB in SQL Server. Sooooo... even though I have no idea how to accomplish this, it just sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the financial server. Any ideas how to accomplish this (within reason)?

    Also, his big thing is that he wants to basically pull up the data from any record a user enters and, using data pulled from that, do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me ODBC eliminates complicated junk like this. :/

    I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or program I have no say over being connected DIRECTLY to my MySQL DB for the website.

    If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of this? (I'm a low-level IT guy with a decent grasp on programming... but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.

  • pl/sql does not work with %rowtype

    - by Manolo
    I want to run a simple PL/SQL program in the Oracle 10g internet environment. The program is:

        DECLARE
            stud_rec  students%ROWTYPE;
            last_name VARCHAR2 := 'Clinton';
        BEGIN
            SELECT * INTO stud_rec FROM students WHERE student_id = 100;
        END;

    I have a table called students with data in it. The issue is that when I try to run this in the SQL command window, I get this message:

        ORA-06550: line 3, column 11:
        PLS-00215: String length constraints must be in range (1 .. 32767)

    I have checked the syntax and I cannot find the error. Any help? Thanks in advance.
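
    The error points at the declaration on line 3: a PL/SQL VARCHAR2 variable must declare an explicit length. A sketch of the fix (30 is an arbitrary size; students.last_name%TYPE would track the real column definition, assuming such a column exists):

        DECLARE
            stud_rec  students%ROWTYPE;
            last_name VARCHAR2(30) := 'Clinton';  -- length added
        BEGIN
            SELECT * INTO stud_rec FROM students WHERE student_id = 100;
        END;
        /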

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK, and there is also an index on Status, LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously pushing the lease time out 5 minutes and setting a new lease ticket.

    The problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock, by the way, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible.

    So what am I missing? I'm thinking it might have something to do with the TOP 1 clause, or that SQL Server does not know that "where Id = @RecordId" is actually in the locked range?

    Deadlock graph:

    Table schema (simplified):
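
    For comparison, the pattern commonly recommended for table-backed queues (a sketch; note that READPAST is not allowed under SERIALIZABLE, so the isolation level would need to drop back to READ COMMITTED) collapses the claim into a single atomic UPDATE, so no window exists between selecting the row and leasing it:

        ;WITH next_job AS (
            SELECT TOP (1) *
            FROM QueuedImportJobs WITH (UPDLOCK, READPAST, ROWLOCK)
            WHERE Status = @Status
              AND (LeaseTimeout IS NULL OR @CurrentTS > LeaseTimeout)
            ORDER BY Id
        )
        UPDATE next_job
        SET LeaseTimeout = DATEADD(MINUTE, 5, @CurrentTS),
            LeaseTicket = NEWID()
        OUTPUT inserted.*;   -- returns the claimed row to the caller

    READPAST additionally lets competing workers skip a row another thread has already locked instead of blocking behind it, which is usually what a queue wants.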
