Search Results

Search found 63598 results on 2544 pages for 'sql add on'.

  • SQL Databases and table design/organization

    - by John McMullen
    (Noob disclaimer.) I'm working on a system (a type of map) that is accessed mostly via three fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, all data on the map is stored in one table. Whenever the map display is loaded, it simply queries the database for contents at X and Y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying so, plus the ID of a row in another table holding that type of action. Essentially, all map data is stored in one table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, where each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only X number of entries per table (coordinate range limits)? Or should the one table just keep growing and growing? There are a lot of queries against the table, so I'm trying to see what is best.
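
    A single table normally scales fine for this access pattern as long as the lookup columns are indexed; splitting by coordinate range mostly adds complexity. A minimal sketch (MySQL syntax, hypothetical column names) with a composite index covering the X/Y lookup:

        CREATE TABLE map_items (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            x         INT NOT NULL,
            y         INT NOT NULL,
            action_id INT NULL,        -- row in the per-type action table, when the flag is set
            INDEX idx_xy (x, y)        -- one index seek for WHERE x = ? AND y = ?, or an x/y range
        );

        -- loading a map viewport becomes a single range scan on the index
        SELECT * FROM map_items
        WHERE x BETWEEN 100 AND 149
          AND y BETWEEN 200 AND 249;

    Row count by itself is rarely the problem; it is worth revisiting the schema only when measured query times degrade despite the index.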

  • T-SQL Unique constraint locked the SQL Server

    - by PaN1C_Showt1Me
    Hi! This is my table:

        CREATE TABLE [ORG].[MyTable](
            ..
            [my_column2] UNIQUEIDENTIFIER NOT NULL
                CONSTRAINT FK_C1 REFERENCES ORG.MyTable2 (my_column2),
            [my_column3] INT NOT NULL
                CONSTRAINT FK_C2 REFERENCES ORG.MyTable3 (my_column3)
            ..
        )

    I've written this constraint to ensure that the combination of my_column2 and my_column3 is always unique:

        ALTER TABLE [ORG].[MyTable]
        ADD CONSTRAINT UQ_MyConstraint UNIQUE NONCLUSTERED (my_column2, my_column3)

    But then suddenly the DB stopped responding; there is a lock or something. Do you have any idea why? What is wrong with the constraint?
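
    Nothing is wrong with the constraint as such. Building its backing index takes a schema-modification lock on MyTable, so the ALTER queues behind any open transaction touching the table, and everything after it queues behind the ALTER, which reads as "the server stopped responding". A way to see who is blocking whom while it hangs (standard DMVs, SQL Server 2005+):

        -- sessions that are blocked, the session blocking them, and what they are running
        SELECT r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,
               t.text AS running_sql
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;

    Once the blocking session commits or is killed, the ALTER completes; on Enterprise edition, adding the constraint WITH (ONLINE = ON) reduces the blocking window.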

  • Collation conflict in SQL Server 2008

    - by vikitor
    Hello, I've been going round on this but haven't found a solution to my problem. My SQL query is:

        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID, dbo.Notification.NotName,
               dbo.TemporalSuspension.TCtsCode, dbo.TemporalSuspension.TCtsCodeRecID,
               dbo.TaxPhylum.PhyName AS Taxon, dbo.TemporalSuspension.TCtsNotes,
               dbo.TemporalSuspension.TCtsRecID, dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID
                   WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All'
               END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxPhylum ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxPhylum.PhyRecID
                                AND dbo.TemporalSuspension.TCtsCode LIKE 'PHY'
        UNION ALL
        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID, dbo.Notification.NotName,
               dbo.TemporalSuspension.TCtsCode, dbo.TemporalSuspension.TCtsCodeRecID,
               dbo.TaxClass.ClaName AS Taxon, dbo.TemporalSuspension.TCtsNotes,
               dbo.TemporalSuspension.TCtsRecID, dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID
                   WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All'
               END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxClass ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxClass.ClaRecID
                               AND dbo.TemporalSuspension.TCtsCode LIKE 'CLA'

    But I don't understand why it doesn't work. I keep getting this error:

        Cannot resolve collation conflict for column 7 in SELECT statement.

    What's wrong? I've used this pattern other times and never got this problem. Thanks
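
    Column 7 of the select list is Taxon: TaxPhylum.PhyName in the first branch and TaxClass.ClaName in the second. The error means those two columns carry different collations, so UNION ALL refuses to combine them. Forcing both onto a common collation is the usual fix; a minimal self-contained sketch against the same tables:

        -- both branches now produce Taxon with the same collation
        SELECT dbo.TaxPhylum.PhyName COLLATE DATABASE_DEFAULT AS Taxon FROM dbo.TaxPhylum
        UNION ALL
        SELECT dbo.TaxClass.ClaName COLLATE DATABASE_DEFAULT AS Taxon FROM dbo.TaxClass;

    Applied to the full query, only the two Taxon expressions need the COLLATE clause.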

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find the strings that match my query. Now I have to search a big table for the strings that do not match. Of course the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                           QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                     QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, "not equal" searches really are slower. Is there any way to make these "is not equal to" searches faster?

    Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                   QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                          QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
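
    Since the "not equal" predicate here is fixed in advance (everything whose TLD is not 'fr'), a PostgreSQL partial index fits the access pattern: only the minority rows the != query wants are indexed, so the planner can use it instead of a full scan. A sketch, assuming tld() is declared IMMUTABLE:

        -- index only the rows the != query returns (a few thousand, not a million)
        CREATE INDEX emailspersons_not_fr_idx
            ON emailspersons (person)
            WHERE tld(email) <> 'fr';

        -- the planner can match this predicate and satisfy the query from the partial index
        EXPLAIN SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';

    The trade-off is that the predicate is baked into the index; a query excluding a different TLD would need its own partial index.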

  • SQL connection to database repeating

    - by user175084
    OK, I'm using a SQL database to get values from different tables. I make the connection and get the values like this:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();
        SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@node", node);
        sqlDa.Fill(dt);
        connection.Close();

    This is one query on the page, and I am calling many other queries on the same page. Do I need to open and close the connection every time? Also, if not, this portion is common to all of them:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();

    Can I put it in one function and call that instead? The code would look cleaner. I tried doing that, but I get errors like "Connection does not exist in the current context". Any suggestions? Thanks

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that is more efficient. Consider two tables, Transactions and Transaction_Entries:

        Transactions
        - id
        - reference_number (varchar)

        Transaction_Entries
        - id
        - account_id
        - transaction_id (references Transactions)

    Notes: there are multiple transaction entries per transaction, and related transactions share the same reference_number string. To get all transaction entries for account X, I do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers found in the previous result set. Then for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)  // in Java
        foreach R in UniqueReferenceNumbers                                    // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
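
    A single statement can fold both steps together by joining back through Transactions on reference_number, sketched here against the schema above (X stands for the account id):

        -- entries of every transaction that shares a reference number
        -- with any of account X's transactions
        SELECT DISTINCT e2.*, t2.reference_number
        FROM Transaction_Entries AS e1
        JOIN Transactions       AS t1 ON e1.transaction_id = t1.id
        JOIN Transactions       AS t2 ON t2.reference_number = t1.reference_number
        JOIN Transaction_Entries AS e2 ON e2.transaction_id = t2.id
        WHERE e1.account_id = X;

    DISTINCT is there because an account with several entries in one transaction would otherwise duplicate that transaction's related entries; an index on Transactions(reference_number) keeps the middle join cheap.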

  • Big table or multiple separate tables? (database design question)

    - by Khou
    This is a database design question. I want to build an invoice web application. An invoice can have many items, and each user has an inventory list of product items that they can store and add to an invoice item. My questions are:

    1. Should I store the product inventory for all users of my application in one single table, or create a separate product inventory table for each user?
    2. Is this even possible? One table is easier, but what if this single table grows too big? Will I have a problem? (The primary key is an INT.)
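
    Conventional practice is one shared table with an owner column: per-user tables multiply maintenance and make cross-user queries painful, and an INT key covers roughly 2.1 billion rows before size is a concern (switch to BIGINT if that ever looks plausible). A hedged sketch with hypothetical names:

        CREATE TABLE ProductInventory (
            ProductID INT IDENTITY(1,1) PRIMARY KEY,
            UserID    INT NOT NULL REFERENCES Users (UserID),
            Name      VARCHAR(100) NOT NULL,
            UnitPrice DECIMAL(10,2) NOT NULL
        );

        -- keeps "one user's inventory" a cheap indexed lookup however large the table grows
        CREATE INDEX IX_ProductInventory_UserID ON ProductInventory (UserID);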

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: a SQL Server DB with several tables that have triggers set on them (collecting history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I back up the DB and then restore it to another machine (a virtual machine in this case, but it does not matter), the triggers don't work any more. In fact, no impersonation of the user works any more. Even a simple statement such as

        exec ('select 3') as user='HistoryUser'

    produces an error:

        Cannot execute as the database principal because the principal "HistoryUser" does not exist,
        this type of principal cannot be impersonated, or you do not have permission.

    I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change the owner to anything else (their recommended solution), the problem remains. If I create another user without login, I can use it for impersonation just fine. That is, this works:

        create user TestUser without login
        go
        exec ('select 3') as user='TestUser'

    I do not want to recreate all those triggers, so is there any way to make the existing HistoryUser work? Bump: sorry, but this is kinda urgent...
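
    One frequent post-restore culprit: the owner SID recorded inside the database no longer maps to a valid principal on the new server, which breaks impersonation of database users across the board (a freshly created user like TestUser works because it was minted after the restore). A hedged pair of checks and a fix to try first, with the database name as a placeholder:

        -- does the owner recorded in the database resolve on this server?
        SELECT d.name, sp.name AS owner_login
        FROM sys.databases AS d
        LEFT JOIN sys.server_principals AS sp ON d.owner_sid = sp.sid
        WHERE d.name = 'YourDatabase';   -- NULL owner_login means an orphaned owner

        -- re-point the owner at a known-good login (sa used here only as an example)
        ALTER AUTHORIZATION ON DATABASE::YourDatabase TO sa;

    If that resolves it, the existing triggers keep working unchanged, since EXECUTE AS only needed the database's principal chain to be valid again.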

  • Adding Days To Date in SQL

    - by Coding Noob
    I am trying to get rows from my database for people who have upcoming birthdays in the next few days (the window is declared up front). It works fine for short windows, but the query breaks if I add, say, 24 days to the current date, because that crosses into the next month. I wonder how I can handle that.

        declare @date int = 10, @month int = 0
        select * from STUDENT_INFO
        where DATEPART(DD, STDNT_DOB) between DATEPART(DD, GETDATE())
                                          and DATEPART(DD, DATEADD(DD, @date, GETDATE()))
          and DATEPART(MM, STDNT_DOB) = DATEPART(MM, DATEADD(MM, @month, GETDATE()))

    This query works fine, but it only checks dates between the 8th and the 18th of the current month. If I run the same query with

        declare @date int = 30, @month int = 0

    it returns nothing, since the day arithmetic would need to carry into the month as well. And if I use:

        declare @date int = 40, @month int = 0
        select * from STUDENT_INFO
        where DATEPART(DD, STDNT_DOB) between DATEPART(DD, GETDATE()) and DATEADD(DD, @date, GETDATE())
          and DATEPART(MM, STDNT_DOB) = DATEPART(MM, DATEADD(MM, @month, GETDATE()))

    it returns results up to the end of this month, but not through 18/12, which is what I need.
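
    Day and month parts compared separately can never carry over a month boundary; comparing each person's next birthday as a real date sidesteps the whole problem. A sketch under the same table and column names (SQL Server 2008+ for the date type):

        DECLARE @days int = 40;
        DECLARE @today date = CAST(GETDATE() AS date);

        SELECT s.*
        FROM STUDENT_INFO AS s
        -- this year's anniversary of the date of birth
        CROSS APPLY (SELECT DATEADD(YEAR, DATEDIFF(YEAR, s.STDNT_DOB, @today), s.STDNT_DOB) AS anniv) AS a
        -- if it has already passed, the next birthday is a year later
        CROSS APPLY (SELECT CASE WHEN a.anniv < @today
                                 THEN DATEADD(YEAR, 1, a.anniv)
                                 ELSE a.anniv END AS next_bday) AS n
        WHERE n.next_bday BETWEEN @today AND DATEADD(DAY, @days, @today);

    The window can now be any length (10, 40, 400 days) without special-casing month or year rollover, and 29 February birthdays fall on 28 February in non-leap years via DATEADD's own clamping.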

  • How to extract the SqlCommand from a compiled LINQ query

    - by Harry
    In normal (not compiled) LINQ to SQL queries you can extract the SqlCommand from the IQueryable via the following code:

        SqlCommand cmd = (SqlCommand)table.Context.GetCommand(query);

    Is it possible to do the same for a compiled query? The following code provides me with a delegate to a compiled query:

        private static readonly Func<Data.DAL.Context, string, IQueryable<Word>> Query_Get =
            CompiledQuery.Compile<Data.DAL.Context, string, IQueryable<Word>>(
                (context, name) => from r in context.GetTable<Word>()
                                   where r.Name == name
                                   select r);

    When I use this to generate the IQueryable and attempt to extract the SqlCommand, it doesn't seem to work. Debugging, I can see that the SqlCommand returned has the 'very' useful CommandText of 'SELECT NULL AS [EMPTY]':

        using (var db = new Data.DAL.Context())
        {
            IQueryable<Word> query = Query_Get(db, "word");
            SqlCommand cmd = (SqlCommand)db.GetCommand(query);
            Console.WriteLine(cmd != null ? cmd.CommandText : "Command Not Found");
        }

    I can't find anything on Google about this particular scenario; no doubt it is not a common thing to attempt... So... any thoughts?

  • Passing a WHERE clause for a Linq-to-Sql query as a parameter

    - by Mantorok
    Hi all. This is probably pushing the boundaries of LINQ to SQL a bit, but given how versatile it has been so far, I thought I'd ask. I have three queries that select identical information and differ only in the WHERE clause. Now, I know I can pass a delegate in, but that only lets me filter the results already returned; I want to build the query up via a parameter to keep the filtering in SQL. Here is the query:

        from row in DataContext.PublishedEvents
        join link in DataContext.PublishedEvent_EventDateTimes on row.guid equals link.container
        join time in DataContext.EventDateTimes on link.item equals time.guid
        where row.ApprovalStatus == "Approved"
           && row.EventType == "Event"
           && time.StartDate <= DateTime.Now.Date.AddDays(1)
           && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1))
        orderby time.StartDate
        select new EventDetails
        {
            Title = row.EventName,
            Description = TrimDescription(row.Description)
        };

    The code I want to pass in via a parameter is:

        time.StartDate <= DateTime.Now.Date.AddDays(1)
        && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1))

    Is this possible? I don't think it is, but I thought I'd check first. Thanks

  • SQL query to calculate running group counts on time-phased data

    - by spong
    I have some data, like this:

        BUG   DATE                    STATUS
        ----  ----------------------  --------
        9012  18/03/2008 9:08:44 AM   OPEN
        9012  18/03/2008 9:10:03 AM   OPEN
        9012  28/03/2008 4:55:03 PM   RESOLVED
        9012  28/03/2008 5:25:00 PM   CLOSED
        9013  18/03/2008 9:12:59 AM   OPEN
        9013  18/03/2008 9:15:06 AM   RESOLVED
        9013  18/03/2008 9:16:44 AM   CLOSED
        9014  18/03/2008 9:17:54 AM   OPEN
        9014  18/03/2008 9:18:31 AM   RESOLVED
        9014  18/03/2008 9:19:30 AM   CLOSED
        9015  18/03/2008 9:22:40 AM   OPEN
        9015  18/03/2008 9:23:03 AM   RESOLVED
        9015  19/03/2008 12:27:08 PM  CLOSED
        9016  18/03/2008 9:24:20 AM   OPEN
        9016  18/03/2008 9:24:35 AM   RESOLVED
        9016  19/03/2008 12:28:14 PM  CLOSED
        9017  18/03/2008 9:25:47 AM   OPEN
        9017  18/03/2008 9:26:02 AM   RESOLVED
        9017  19/03/2008 12:30:30 PM  CLOSED

    which I would like to transform into something like this:

        DATE                    OPEN  RESOLVED  CLOSED
        ----------------------  ----  --------  ------
        18/03/2008 9:08:44 AM      1         0       0
        18/03/2008 9:12:59 AM      2         0       0
        18/03/2008 9:15:06 AM      1         1       0
        18/03/2008 9:16:44 AM      1         0       1
        18/03/2008 9:17:54 AM      2         0       1
        18/03/2008 9:18:31 AM      1         1       0
        18/03/2008 9:19:30 AM      1         0       2
        18/03/2008 9:22:40 AM      2         0       2
        18/03/2008 9:23:03 AM      1         1       2
        18/03/2008 9:24:20 AM      2         1       2
        18/03/2008 9:24:35 AM      1         2       2
        18/03/2008 9:25:47 AM      2         2       2
        18/03/2008 9:26:02 AM      1         3       2
        19/03/2008 12:27:08 PM     1         2       3
        19/03/2008 12:28:14 PM     1         1       4
        19/03/2008 12:30:30 PM     1         0       5
        28/03/2008 4:55:03 PM      0         1       5
        28/03/2008 5:25:00 PM      0         0       6

    i.e. keeping running counts of bugs in each status. This is easy enough to code up using cursors, but I'm wondering if any of you SQL gurus out there can help with a query to achieve this? Ideally for MySQL, but I'm curious to see anything that will work.
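
    With window functions available (MySQL 8.0+, PostgreSQL 8.4+, SQL Server 2012+) this needs no cursor: LAG finds each bug's previous status, every event then contributes +1 to its new status and -1 to its old one, and running sums accumulate the deltas. A sketch with a hypothetical bug_events(bug, date, status) table:

        WITH deltas AS (
            SELECT date, status,
                   LAG(status) OVER (PARTITION BY bug ORDER BY date) AS prev_status
            FROM bug_events
        )
        SELECT date,
               SUM(CASE WHEN status = 'OPEN' THEN 1 ELSE 0 END -
                   CASE WHEN prev_status = 'OPEN' THEN 1 ELSE 0 END)
                   OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS open_count,
               SUM(CASE WHEN status = 'RESOLVED' THEN 1 ELSE 0 END -
                   CASE WHEN prev_status = 'RESOLVED' THEN 1 ELSE 0 END)
                   OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS resolved_count,
               SUM(CASE WHEN status = 'CLOSED' THEN 1 ELSE 0 END -
                   CASE WHEN prev_status = 'CLOSED' THEN 1 ELSE 0 END)
                   OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS closed_count
        FROM deltas
        ORDER BY date;

    One honest difference from the sample output: the duplicate OPEN row for bug 9012 contributes a zero delta, so it shows up with unchanged counts rather than being skipped. On engines without window functions, a correlated subquery or cursor version remains the fallback.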

  • Need to use a string variable in a stored procedure in SQL Server 2005

    - by bassam
    I have this procedure:

        CREATE PROC [dbo].Salse_Ditail
            -- Add the parameters for the stored procedure here
            @Report_Form varchar(1),
            @DateFrom datetime,
            @DateTo datetime,
            @COMPANYID varchar(3),
            @All varchar(1),
            @All1 varchar(1),
            @All2 varchar(1),
            @All3 varchar(1),
            @All4 varchar(1),
            @All5 varchar(1),
            @Sector varchar(10),
            @Report_Parameter nvarchar(max)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            DECLARE @STRWhere nvarchar(max)

            IF @All5 = 0 AND @All4 = 0 AND @All3 = 0 AND @All2 = 0 AND @All1 = 0 AND @All = 1
                SET @STRWhere = N'and Sector_id = @Sector'

            IF @Report_Form = 1 OR @Report_Form = 3 OR @Report_Form = 4
                SELECT RETURNREASONCODEID, SITE, SITE_NAME, Factory_id, Factory_Name, Sector_id, sector_name,
                       Customer_name, Customer_id, ITEMID, ITEMNAME, SALESMANID, SALESMAN_NAME, Net_Qty,
                       Net_Salse, Gross_Sales, Gross_Qty, NETWEIGHT_Gross, NETWEIGHT_salse_Gross,
                       NETWEIGHT_NET, NETWEIGHT_salse_NET, Return_Sales, Free_Good, CollectionAmount
                FROM hal_bas_new_rep
                WHERE DATAAREAID = @COMPANYID
                  AND INVOICEDATE >= @DateFrom
                  AND INVOICEDATE <= @DateTo
                  AND Report_Activti = @Report_Form

            IF @Report_Form = 2
                SELECT RETURNREASONCODEID, RETURNREASONDESC, SITE, SITE_NAME, Factory_id, Factory_Name,
                       Sector_id, sector_name, Customer_name, Customer_id, ITEMID, ITEMNAME,
                       SALESMANID, SALESMAN_NAME, Return_Sales
                FROM dbo.hal_bas_new_rep
                WHERE DATAAREAID = @COMPANYID
                  AND INVOICEDATE >= @DateFrom
                  AND INVOICEDATE <= @DateTo
                  AND Report_Activti = @Report_Form
                  AND RETURNREASONCODEID IN (SELECT Val FROM dbo.fn_String_To_Table(@Report_Parameter, ',', 1))
                /* @STRWhere -- question: how can I use the variable here? */
        END
        GO

    As you can see, I'm constructing a condition for the WHERE clause in a variable, but I don't know how to use it.
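
    T-SQL can't splice a variable into a static statement, so @STRWhere as written never takes effect; the standard pattern is dynamic SQL run through sp_executesql, keeping every value a real parameter (which keeps the plan cacheable and avoids injection). A sketch of the @Report_Form = 2 branch, assuming the same names as above (note the doubled quotes around the comma inside the literal):

        DECLARE @sql nvarchar(max)
        SET @sql = N'SELECT RETURNREASONCODEID, RETURNREASONDESC, SITE, SITE_NAME, Factory_id,
                            Factory_Name, Sector_id, sector_name, Customer_name, Customer_id,
                            ITEMID, ITEMNAME, SALESMANID, SALESMAN_NAME, Return_Sales
                     FROM dbo.hal_bas_new_rep
                     WHERE DATAAREAID = @COMPANYID
                       AND INVOICEDATE >= @DateFrom
                       AND INVOICEDATE <= @DateTo
                       AND Report_Activti = @Report_Form
                       AND RETURNREASONCODEID IN
                           (SELECT Val FROM dbo.fn_String_To_Table(@Report_Parameter, '','', 1)) '
                  + ISNULL(@STRWhere, N'')   -- appends "and Sector_id = @Sector" when set

        EXEC sp_executesql @sql,
             N'@COMPANYID varchar(3), @DateFrom datetime, @DateTo datetime,
               @Report_Form varchar(1), @Report_Parameter nvarchar(max), @Sector varchar(10)',
             @COMPANYID = @COMPANYID, @DateFrom = @DateFrom, @DateTo = @DateTo,
             @Report_Form = @Report_Form, @Report_Parameter = @Report_Parameter, @Sector = @Sector

    ISNULL guards the case where none of the @All flags matched and @STRWhere was never assigned.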

  • Saving ntext data from SQL Server to a file directory using ASP

    - by April
    A variety of files (PDFs, images, etc.) are stored in an ntext field on a MS SQL Server. I am not sure what type is really in this field; it shows question marks and undefined characters, so I am assuming it is binary. The script is supposed to iterate through the rows and extract and save these files to a temp directory. "filename" and "contenttype" are given, and "data" is whatever is in the ntext field. I have tried several solutions:

    1)  data.SaveToFile "/temp/" & filename, 2

        Error: Object required: '????????????????????'

    2)  File.WriteAllBytes "/temp/" & filename, data

        Error: Object required: 'File'

        I have no idea how to import this, or the server object for MapPath. (Cue: what a noob!)

    3)  Const adTypeBinary = 1
        Const adSaveCreateOverWrite = 2
        Dim BinaryStream
        Set BinaryStream = CreateObject("ADODB.Stream")
        BinaryStream.Type = adTypeBinary
        BinaryStream.Open
        BinaryStream.Write data
        BinaryStream.SaveToFile "C:\temp\" & filename, adSaveCreateOverWrite

        Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.

    4)  Response.ContentType = contenttype
        Response.AddHeader "content-disposition", "attachment;" & filename
        Response.BinaryWrite data
        Response.End

        This works, but the file should be saved to the server instead of popping up a save-as dialog. I am not sure if there is a way to save the response to a file.

    Thanks for shedding light on any of these problems!

  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert against an Oracle database to maintain a target total, and returns the difference between the existing value and the new value. Here is the code I have so far:

        FUNCTION calcTargetTotal(accountId varchar2, newTotal numeric) RETURN number IS
            oldTotal   numeric(20,6);
            difference numeric(20,6);
        BEGIN
            difference := 0;
            BEGIN
                SELECT value INTO oldTotal
                FROM target_total
                WHERE account_id = accountId
                FOR UPDATE OF value;

                IF (oldTotal != newTotal) THEN
                    UPDATE target_total SET value = newTotal WHERE account_id = accountId;
                    difference := newTotal - oldTotal;
                END IF;
            EXCEPTION
                WHEN NO_DATA_FOUND THEN
                    BEGIN
                        difference := newTotal;
                        INSERT INTO target_total (account_id, value)
                        VALUES (accountId, newTotal);
                    -- sometimes a race condition occurs and this stmt fails;
                    -- in those cases try to update again
                    EXCEPTION
                        WHEN DUP_VAL_ON_INDEX THEN
                            BEGIN
                                difference := 0;
                                SELECT value INTO oldTotal
                                FROM target_total
                                WHERE account_id = accountId
                                FOR UPDATE OF value;

                                IF (oldTotal != newTotal) THEN
                                    UPDATE target_total SET value = newTotal WHERE account_id = accountId;
                                    difference := newTotal - oldTotal;
                                END IF;
                            END;
                    END;
            END;
            RETURN difference;
        END calcTargetTotal;

    This works as expected in unit tests with multiple threads, never failing. However, when loaded on a live system we have seen it fail with a stack trace looking like this:

        ORA-01403: no data found
        ORA-00001: unique constraint () violated
        ORA-01403: no data found

    The line numbers (which I have removed since they are meaningless out of context) verify that the first update fails due to no data, the insert fails due to uniqueness, and the second update fails with no data, which should be impossible. From what I have read in other threads, a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas how to prevent this from occurring?
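
    The "impossible" sequence has a plausible reading: FOR UPDATE only locks rows it can see, so when session B hits DUP_VAL_ON_INDEX against session A's uncommitted insert, B's follow-up SELECT ... FOR UPDATE still cannot see A's row and raises NO_DATA_FOUND again. Retrying in a loop until one arm succeeds, or collapsing the read-modify-write into a MERGE, narrows the window; a hedged sketch of the MERGE form using the same names (it can still hit DUP_VAL_ON_INDEX in a tight race, so a retry handler around it stays worthwhile):

        MERGE INTO target_total t
        USING (SELECT accountId AS account_id, newTotal AS value FROM dual) s
        ON (t.account_id = s.account_id)
        WHEN MATCHED THEN
            UPDATE SET t.value = s.value
        WHEN NOT MATCHED THEN
            INSERT (account_id, value)
            VALUES (s.account_id, s.value);

    Computing the returned difference would still need the old value read first, so the SELECT ... FOR UPDATE stays, but the update/insert split collapses into one statement.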

  • SQL Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal); the other is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.

    Current solution: I thought replication would be a nice fit, so I've made ServerLocal a publisher, and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem: records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of them (aspnet_users, for instance) I'd like to keep the existing data and update the records only where necessary. What kind of replication fits my problem?

  • Error: Too Many Arguments Specified when Inserting Values from ASP.NET to SQL Server

    - by SidC
    Good afternoon, all. I have a wizard control that contains 20 textboxes for part numbers and another 20 for quantities. I want the part numbers and quantities loaded into the following table:

        USE [Diel_inventory]
        GO
        /****** Object:  Table [dbo].[QUOTEDETAILPARTS]    Script Date: 05/09/2010 16:26:54 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[QUOTEDETAILPARTS](
            [QuoteDetailPartID] [int] IDENTITY(1,1) NOT NULL,
            [QuoteDetailID] [int] NOT NULL,
            [PartNumber] [float] NULL,
            [Quantity] [int] NULL,
            CONSTRAINT [pkQuoteDetailPartID] PRIMARY KEY CLUSTERED
            (
                [QuoteDetailPartID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[QUOTEDETAILPARTS] WITH CHECK
            ADD CONSTRAINT [fkQuoteDetailID] FOREIGN KEY ([QuoteDetailID])
            REFERENCES [dbo].[QUOTEDETAIL] ([ID])
            ON UPDATE CASCADE
            ON DELETE CASCADE
        GO

    Here's the snippet from my sproc for this insert:

        set @ID = scope_identity()
        insert into dbo.QuoteDetailParts (QuoteDetailPartID, QuoteDetailID, PartNumber, Quantity)
        values (@ID, @QuoteDetailPartID, @PartNumber, @Quantity)

    When I run the ASPX page, I receive an error that there are too many arguments specified for my stored procedure. I understand why I'm getting the error, given the above table layout. However, I need help structuring my insert syntax to take values from all 20 PartNumber and Quantity field pairs. Thanks, Sid
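
    The error itself just means the page is passing more parameters than the procedure declares. Rather than declaring forty scalars (@PartNumber1 ... @Quantity20), on SQL Server 2008+ a table-valued parameter carries all the pairs as one argument; a hedged sketch (the type and procedure names are made up):

        -- one-time setup: a table type for the part/quantity pairs
        CREATE TYPE dbo.PartQuantityList AS TABLE (
            PartNumber float NULL,
            Quantity   int   NULL
        );
        GO
        CREATE PROCEDURE dbo.InsertQuoteDetailParts
            @QuoteDetailID int,
            @Parts dbo.PartQuantityList READONLY
        AS
        BEGIN
            INSERT INTO dbo.QUOTEDETAILPARTS (QuoteDetailID, PartNumber, Quantity)
            SELECT @QuoteDetailID, PartNumber, Quantity
            FROM @Parts;
        END

    From ADO.NET the parameter is a two-column DataTable passed with SqlDbType.Structured. Note the insert list also omits QuoteDetailPartID: it is an IDENTITY column, so supplying @ID for it (as in the snippet above) fails unless IDENTITY_INSERT is on.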

  • Get the identity value from an insert via a .NET DataSet update call

    - by DeveloperMCT
    Here is some sample code that inserts a record into a db table:

        Dim ds As DataSet = New DataSet()
        da.Fill(ds, "Shippers")
        Dim RowDatos As DataRow
        RowDatos = ds.Tables("Shippers").NewRow
        RowDatos.Item("CompanyName") = "Serpost Peru"
        RowDatos.Item("Phone") = "(511) 555-5555"
        ds.Tables("Shippers").Rows.Add(RowDatos)
        Dim custCB As SqlCommandBuilder = New SqlCommandBuilder(da)
        da.Update(ds, "Shippers")

    It inserts a row in the Shippers table; ShipperID is an identity value. My question is: how can I retrieve the identity value generated when the new row is inserted into the Shippers table? I have done several web searches, and the sources I've seen either don't answer this specifically or go off to talk about stored procedures. Any help would be appreciated. Thanks!
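
    The SqlCommandBuilder-generated InsertCommand doesn't fetch the new key. One approach, sketched here on the assumption that the identity column is named ShipperID: replace the adapter's InsertCommand with one whose batch returns SCOPE_IDENTITY(), and set the command's UpdatedRowSource to UpdateRowSource.FirstReturnedRecord so the adapter copies the value back into the row during the update. The T-SQL that command would carry:

        INSERT INTO Shippers (CompanyName, Phone)
        VALUES (@CompanyName, @Phone);

        -- written back into the DataRow because of UpdateRowSource.FirstReturnedRecord
        SELECT CAST(SCOPE_IDENTITY() AS int) AS ShipperID;

    After da.Update(ds, "Shippers"), RowDatos.Item("ShipperID") then holds the database-assigned value.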

  • Which isolation level should I use for the following insert-if-not-present transaction?

    - by Steve Guidi
    I've written a LINQ to SQL program that essentially performs an ETL task, and I've noticed many places where parallelization would improve its performance. However, I'm concerned about preventing uniqueness constraint violations when two threads perform the following task (pseudocode):

        Record CreateRecord(string recordText)
        {
            using (MyDataContext database = GetDatabase())
            {
                Record existingRecord = database.MyTable.FirstOrDefault(record.KeyPredicate());
                if (existingRecord == null)
                {
                    existingRecord = CreateRecord(recordText);
                    database.MyTable.InsertOnSubmit(existingRecord);
                }
                database.SubmitChanges();
                return existingRecord;
            }
        }

    In general, this code executes a SELECT statement to test for record existence, followed by an INSERT statement if the record doesn't exist, all encapsulated in an implicit transaction. When two threads run this code for the same instance of recordText, I want to prevent them from simultaneously determining that the record doesn't exist and thereby both attempting to create the same record. An isolation level and an explicit transaction would work, except I'm not certain which isolation level I should use: Serializable should work, but seems too strict. Is there a better choice?
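
    Serializable is in fact the level that closes this gap: its key-range locks make the second thread's existence check wait until the first thread's insert commits, which weaker levels do not guarantee. It doesn't have to be session-wide, though; the range lock can be scoped to just the check with table hints, as in this sketch (the key column is hypothetical):

        BEGIN TRANSACTION;

        IF NOT EXISTS (SELECT 1
                       FROM MyTable WITH (UPDLOCK, HOLDLOCK)   -- range-lock the key's position
                       WHERE RecordKey = @recordKey)
        BEGIN
            INSERT INTO MyTable (RecordKey, RecordText)
            VALUES (@recordKey, @recordText);
        END

        COMMIT;

    HOLDLOCK gives serializable range behaviour for just that statement, and UPDLOCK makes two concurrent checks conflict instead of both reading "absent". From LINQ to SQL, the equivalent is wrapping the existing code in a TransactionScope with IsolationLevel.Serializable and retrying on deadlock; the hint form above would live in a stored procedure instead.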

  • SQL Server 2005 multiple inserts with C#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries) {
            // build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand()) {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0},@name{0});";
                int count = 0;
                string query = string.Empty;
                // build a large query
                foreach (var entry in entries) {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                // and then execute the command
                ...
            }
        }

    My question is this: should I keep using the above approach (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlCommand cmd = new SqlCommand())
        using (SqlConnection conn = new SqlConnection()) {
            // assign connection string and open connection
            ...
            cmd.Connection = conn;
            foreach (var entry in entries) {
                cmd.Parameters.Clear();   // parameters must be reset between iterations
                cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id,@name);";
                cmd.Parameters.AddWithValue("@id", entry.Id);
                cmd.Parameters.AddWithValue("@name", entry.Name);
                cmd.ExecuteNonQuery();
            }
        }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!
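
    For SQL Server 2005 specifically there is a third option that keeps one round trip without the giant concatenated string: send the whole batch as a single XML parameter and shred it server-side (table-valued parameters only arrived in 2008); SqlBulkCopy is the other heavy-duty route. A sketch of the XML form, assuming the Entries schema above with varchar(50) columns:

        -- @xml would be passed from C# as a single parameter; a literal is used here for demonstration
        DECLARE @xml xml;
        SET @xml = N'<entries><entry id="1" name="a"/><entry id="2" name="b"/></entries>';

        INSERT INTO Entries (id, name)
        SELECT e.c.value('@id',   'varchar(50)'),
               e.c.value('@name', 'varchar(50)')
        FROM @xml.nodes('/entries/entry') AS e(c);

    Whichever of the two original approaches is kept, wrapping the loop in one SqlTransaction usually matters more than the batching itself, since per-statement commits dominate the cost.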

  • Converting rows to Columns in SQL

    - by Ram
    I have a table (actually a view, but I've simplified my example to a table) which gives me data like this:

        id  CompanyName  website
        --  -----------  -------------
        1   Google       google.com
        2   Google       google.net
        3   Google       google.org
        4   Google       google.in
        5   Google       google.de
        6   Microsoft    Microsoft.com
        7   Microsoft    live.com
        8   Microsoft    bing.com
        9   Microsoft    hotmail.com

    I am looking to convert it into a result like this:

        CompanyName  website1       website2    website3    website4     website5   website6
        -----------  -------------  ----------  ----------  -----------  ---------  --------
        Google       google.com     google.net  google.org  google.in    google.de  NULL
        Microsoft    Microsoft.com  live.com    bing.com    hotmail.com  NULL       NULL

    I have looked into PIVOT, but it seems the row values cannot be dynamic (they can only be certain predefined values). Also, if there are more than 6 websites, I want to limit the output to the first 6. A dynamic pivot makes sense, but would I have to incorporate it into my view? Is there a simpler solution? Here are the SQL scripts:

        CREATE TABLE [dbo].[Company](
            [id] [int] NULL,
            [CompanyName] [varchar](50) NULL,
            [website] [varchar](50) NULL
        ) ON [PRIMARY]
        GO

        insert into company values (1,'Google','google.com')
        insert into company values (2,'Google','google.net')
        insert into company values (3,'Google','google.org')
        insert into company values (4,'Google','google.in')
        insert into company values (5,'Google','google.de')
        insert into company values (6,'Microsoft','Microsoft.com')
        insert into company values (7,'Microsoft','live.com')
        insert into company values (8,'Microsoft','bing.com')
        insert into company values (9,'Microsoft','hotmail.com')
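
    Because the six output columns are fixed even though the values aren't, a static query works: number each company's sites with ROW_NUMBER, then pivot by hand with conditional aggregation. A sketch against the Company table above:

        WITH ranked AS (
            SELECT CompanyName, website,
                   ROW_NUMBER() OVER (PARTITION BY CompanyName ORDER BY id) AS rn
            FROM dbo.Company
        )
        SELECT CompanyName,
               MAX(CASE WHEN rn = 1 THEN website END) AS website1,
               MAX(CASE WHEN rn = 2 THEN website END) AS website2,
               MAX(CASE WHEN rn = 3 THEN website END) AS website3,
               MAX(CASE WHEN rn = 4 THEN website END) AS website4,
               MAX(CASE WHEN rn = 5 THEN website END) AS website5,
               MAX(CASE WHEN rn = 6 THEN website END) AS website6
        FROM ranked
        WHERE rn <= 6          -- caps each company at its first six sites
        GROUP BY CompanyName;

    Missing slots come back NULL, matching the desired output, and the same query works unchanged when the source is the view rather than the table.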

  • LINQ-to-SQL - Join, Count

    - by ile
    I have the following query:

        var result = (
            from role in db.Roles
            join user in db.Users on role.RoleID equals user.RoleID
            where user.CreatedByUserID == userID
            orderby user.FirstName ascending
            select new UserViewModel
            {
                UserID = user.UserID,
                PhotoID = user.PhotoID.ToString(),
                FirstName = user.FirstName,
                LastName = user.LastName,
                FullName = user.FirstName + " " + user.LastName,
                Email = user.Email,
                PhoneNumber = user.Phone,
                AccessLevel = role.Name
            });

    Now I need to modify this query. The other table I have is Deals, and I would like to count how many deals each user created in the last month and the last year. I tried something like this:

        var result = (
            from role in db.Roles
            join user in db.Users on role.RoleID equals user.RoleID
            //join dealsYear in db.Deals on date.Year equals dealsYear.DateCreated.Year
            join dealsYear in (
                from deal in db.Deals
                group deal by deal.DateCreated into d
                select new { DateCreated = d.Key, dealsCount = d.Count() }
            ) on date.Year equals dealsYear.DateCreated.Year into dYear
            join dealsMonth in (
                from deal in db.Deals
                group deal by deal.DateCreated into d
                select new { DateCreated = d.Key, dealsCount = d.Count() }
            ) on date.Month equals dealsMonth.DateCreated.Month into dMonth
            where user.CreatedByUserID == userID
            orderby user.FirstName ascending
            select new UserViewModel
            {
                UserID = user.UserID,
                PhotoID = user.PhotoID.ToString(),
                FirstName = user.FirstName,
                LastName = user.LastName,
                FullName = user.FirstName + " " + user.LastName,
                Email = user.Email,
                PhoneNumber = user.Phone,
                AccessLevel = role.Name,
                DealsThisYear = dYear,
                DealsThisMonth = dMonth
            });

    but here even the syntax isn't correct. Any ideas? By the way, is there any good book on LINQ to SQL with examples?
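
    The group-join approach fights the shape of the problem; a correlated count per user in the projection is simpler, and LINQ to SQL translates such counts into subqueries. The SQL being aimed at looks roughly like the sketch below (the Deals-to-user link column is a guess, and "this" month/year is used to match the property names; shift with DATEADD if "last" is meant literally):

        SELECT u.UserID, u.FirstName, u.LastName, r.Name AS AccessLevel,
               (SELECT COUNT(*) FROM Deals d
                 WHERE d.CreatedByUserID = u.UserID
                   AND YEAR(d.DateCreated) = YEAR(GETDATE())) AS DealsThisYear,
               (SELECT COUNT(*) FROM Deals d
                 WHERE d.CreatedByUserID = u.UserID
                   AND YEAR(d.DateCreated)  = YEAR(GETDATE())
                   AND MONTH(d.DateCreated) = MONTH(GETDATE())) AS DealsThisMonth
        FROM Users u
        JOIN Roles r ON r.RoleID = u.RoleID
        WHERE u.CreatedByUserID = @userID
        ORDER BY u.FirstName;

    In the LINQ projection that is simply DealsThisYear = db.Deals.Count(d => d.CreatedByUserID == user.UserID && d.DateCreated.Year == DateTime.Now.Year), and similarly with the month test added for DealsThisMonth.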

  • SQL Server, .NET Framework 4, IIS 7: weird sort order from DB to page

    - by ila
    I am experiencing strange behaviour when reading a result set from the database in a calling method: the order of the rows differs from what the database should return.

    My farm:
    - database server: SQL Server 2008 on Windows Server 2008 64-bit
    - web servers: a couple of load-balanced Windows Server 2008 64-bit machines running IIS 7

    The application runs in a v4.0 app pool set to enable 32-bit applications.

    Here's a description of the problem:
    - a stored procedure is called that returns a result set sorted on a particular column
    - I can see the call to the SP in Profiler; if I run the statement myself, I see correct sorting
    - the calling page gets the results and, before any further processing, logs the rows immediately after the SP execution
    - the results are in a completely different order (I cannot even tell whether they are sorted in any way)

    Some details on the stored procedure:
    - it is called by code using a SqlDataAdapter
    - it has an output value (a count of the rows) that is read correctly
    - which sort field to use is passed as a parameter
    - it makes use of temp tables to collect data and perform the desired sort

    Any idea what I could check? The same code and database work correctly in a test environment that is 32-bit and not load-balanced.
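
    A pattern that matches these symptoms: rows were put into the temp table in sorted order, but the final SELECT has no ORDER BY (or the ORDER BY sits on an inner query), and SQL Server is then free to return rows in any order; with different hardware, parallelism, or plans, the test box happens to preserve the order and production doesn't. Worth confirming the procedure's last statement orders explicitly, along these lines (the sort columns are hypothetical):

        -- the sort must be on the SELECT that returns the rows,
        -- not on how #results was filled
        SELECT *
        FROM #results
        ORDER BY CASE @sortField WHEN 'name' THEN CustomerName
                                 WHEN 'city' THEN City
                 END;

    With a CASE like this, all branches must share a comparable type, so mixed-type sort fields need casting to a common one.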

  • Get dragged/saved item state back from SQL Server

    - by user571507
    OK, I've seen many posts on how to serialize the order of dragged items to get a hash, and they explain how to save it. Now the question is: how do I restore the dragged items' state the next time the user logs in, using the hash value I saved? For example:

        <ul class="list">
            <li id="id_1">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_2">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_3">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_4">
                <div class="item ui-corner-all ui-widget"></div>
            </li>
        </ul>

    which on serialize gives "id[]=1&id[]=2&id[]=3&id[]=4". Now suppose I saved that to a SQL Server database in a single field called SortOrder. How do I get the items back into that order? The code that makes these items sortable is below, without which people wouldn't know which library I used to sort and serialize:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data: tens of thousands of rows. (No, that isn't a lot of data for SQL Server; I'm just pointing out that I expect it to take some time to come down the pipe from the other web service.) The data refresh can take between 5 and 10 minutes, of which the actual data update portion is between 1 and 2 minutes. This means the service would be down, for all intents and purposes, whenever consumers request this type of data.

    I would like to implement a system where the data is always available. The only thing that comes to mind is maintaining two separate copies: populate the inactive one, swap it to active, then populate the other. I'm not sure of the best way to do this. My current ideas revolve around either two sets of the schema in a single database (using views to access the active set) or two databases, each with the same schema, with the application rotating between them. Any suggestions from someone who has done something like this before?
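
    The swap can live inside one database rather than two: keep twin copies of the tables, and use a synonym (or the views mentioned above) as the switchable "active" pointer, so readers never see the load in progress. A sketch with hypothetical names:

        -- one-time setup: consumers always read through the synonym
        CREATE SYNONYM dbo.ActiveData FOR dbo.Data_A;   -- Data_A / Data_B are twin tables

        -- each refresh cycle: bulk-load the inactive twin (slow, but readers
        -- keep hitting Data_A through the synonym the whole time)
        TRUNCATE TABLE dbo.Data_B;
        -- ... bulk insert into dbo.Data_B here ...

        -- then flip the pointer inside a short transaction
        BEGIN TRANSACTION;
        DROP SYNONYM dbo.ActiveData;
        CREATE SYNONYM dbo.ActiveData FOR dbo.Data_B;
        COMMIT;

    ALTER SCHEMA ... TRANSFER between a "staging" and a "live" schema is the same trick at whole-schema granularity, which maps directly onto the two-sets-of-the-schema idea.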
