Search Results

Search found 27519 results on 1101 pages for 'sql learner'.


  • How to make this sub-sub-query work?

    - by Josh Weissbock
    I am trying to do this in one query. I asked a similar question a few days ago, but my personal requirements have changed. I have a game-type website where users can attend "classes". I am using MySQL and have four tables:

        hl_classes (int id, int professor, varchar class, text description)
        hl_classes_lessons (int id, int class_id, varchar lessonTitle, varchar lexiconLink, text lessonData)
        hl_classes_answers (int id, int lesson_id, int student, text submit_answer, int percent)
        hl_classes_terms (a list of all the terms; the current term has the field active = '1')

    hl_classes stores all of the classes on the website. The lessons are the individual lessons for each class; a class can have infinite lessons, and each lesson is available in a specific term. When a user submits their answers to a lesson, the submission is stored in hl_classes_answers. A user can only answer each lesson once, lessons have to be answered sequentially, and all users attend all classes.

    What I am trying to do is grab the next lesson each user has to do in each class. When users start they are in term 1. When they complete all 10 lessons in each class they move on to term 2, and when they finish lesson 20 in each class they move on to term 3. Let's say we know the term the user is in from the PHP variable $term.

    This is the query I am currently trying to massage out, but it doesn't work, specifically because hC.id is unknown in the WHERE clause of the derived table:

        SELECT hC.id, hC.class,
               (SELECT MIN(output.id) AS nextLessonID
                FROM (SELECT id, class_id
                      FROM hl_classes_lessons hL
                      WHERE hL.class_id = hC.id
                      ORDER BY hL.id
                      LIMIT $term,10) AS output
                WHERE output.id NOT IN (SELECT lesson_id
                                        FROM hl_classes_answers
                                        WHERE student = $USER_ID)) AS nextLessonID
        FROM hl_classes hC

    My logic behind this query is: for each class, select all of the lessons in the term the current user is in, filter out the lessons the user has already done, and grab the minimum id of the lessons yet to be done. That will be the lesson the user has to do. I hope I have made my question clear enough.
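
    The error happens because MySQL does not let a derived table in the FROM clause reference an alias from the outer query. A minimal sketch of one workaround, assuming "next lesson" is simply the lowest-id unanswered lesson per class; the per-term window from LIMIT $term,10 is deliberately left out and would still need to be layered on top:

        -- Hedged sketch (MySQL). Avoids the correlated derived table entirely:
        -- anti-join the answers for this student, then take MIN(id) per class.
        SELECT hC.id, hC.class, MIN(hL.id) AS nextLessonID
        FROM hl_classes hC
        JOIN hl_classes_lessons hL ON hL.class_id = hC.id
        LEFT JOIN hl_classes_answers hA
               ON hA.lesson_id = hL.id AND hA.student = $USER_ID
        WHERE hA.id IS NULL           -- keep only lessons not yet answered
        GROUP BY hC.id, hC.class;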

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside of them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit?

    Update 1

    So far the only way I've gotten around the issue is to step around the .Save routine with:

        var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
        query.SetString("Notes", orderHeader.Notes);
        query.SetString("Status", orderHeader.OrderStatus);
        query.SetInt32("Order", orderHeader.OrderHeaderId);
        query.ExecuteUpdate();

    It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • Best way to store sales tax information

    - by Seph
    When designing a stock management database system (sales / purchases), what would be the best way to store the various taxes and other such amounts? A few of the fields that could be saved are:

        Unit price excluding tax
        Unit price including tax
        Tax per item
        Total excluding tax (rounded to 2 decimals)
        Total including tax (rounded to 2 decimals)
        Total tax (rounded to 2 decimals)

    Currently the most reasonable solution so far is storing (roughly) item, quantity, total excluding tax (rounded) and the total tax (rounded). Can anyone suggest a better way of storing these details for a generic system?

    Also, given that the system needs to be robust, what should be done if there were multiple tax values (e.g. state and city) which might need to be separated? In this case a separate table would be in order, but would it be considered excessive to just have a row id and some tax id mapping to a totalTax column?
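
    For the multi-tax case, one shape that often comes up is a child table with one row per applied tax. A hedged sketch only; the table and column names below are hypothetical and not from the question:

        -- Line rows store net amounts; each applied tax (state, city, ...)
        -- becomes its own row, keeping both the rate and the rounded amount
        -- actually charged at sale time.
        CREATE TABLE sale_line (
            line_id        INT PRIMARY KEY,
            item_id        INT NOT NULL,
            quantity       DECIMAL(12,3) NOT NULL,
            unit_price_net DECIMAL(12,4) NOT NULL,   -- excluding tax
            total_net      DECIMAL(12,2) NOT NULL    -- rounded line total excluding tax
        );

        CREATE TABLE sale_line_tax (
            line_id    INT NOT NULL REFERENCES sale_line(line_id),
            tax_code   VARCHAR(20) NOT NULL,         -- e.g. 'STATE', 'CITY'
            tax_rate   DECIMAL(7,4) NOT NULL,        -- rate as applied at sale time
            tax_amount DECIMAL(12,2) NOT NULL,       -- rounded amount charged
            PRIMARY KEY (line_id, tax_code)
        );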

    Read the article

  • PIVOT / UNPIVOT IN SQL 2008

    - by Nev_Rahd
    Hello, I have child / parent tables as below:

        MasterTable: MasterID, Description
        ChildTable: ChildID, MasterID, Description

    Using PIVOT / UNPIVOT, how can I get a result like the one below in a single row (if MasterID 1 has x child records)?

        MasterID, ChildID1, Description1, ChildID2, Description2, ... ChildIDx, Descriptionx

    Thanks
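
    Since two columns (ChildID and Description) are being pivoted, a plain PIVOT needs one pivot per column; a hedged sketch using ROW_NUMBER plus conditional aggregation is often simpler. This assumes a fixed, known number of child columns (two here); an unknown count would need dynamic SQL:

        -- SQL Server 2008 syntax, fixed at two child columns.
        WITH numbered AS (
            SELECT m.MasterID,
                   c.ChildID,
                   c.Description,
                   ROW_NUMBER() OVER (PARTITION BY m.MasterID ORDER BY c.ChildID) AS rn
            FROM MasterTable m
            JOIN ChildTable c ON c.MasterID = m.MasterID
        )
        SELECT MasterID,
               MAX(CASE WHEN rn = 1 THEN ChildID END)     AS ChildID1,
               MAX(CASE WHEN rn = 1 THEN Description END) AS Description1,
               MAX(CASE WHEN rn = 2 THEN ChildID END)     AS ChildID2,
               MAX(CASE WHEN rn = 2 THEN Description END) AS Description2
        FROM numbered
        GROUP BY MasterID;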

    Read the article

  • automatic id generation which is a primary key

    - by abhi
    In VB.NET, while entering a new entry I want to assign the record a unique id. In the numeric case I do it this way:

        Dim ItemID As Integer
        KAYAReqConn.Open()
        SQLCmd = New SqlCommand("SELECT ISNULL(MAX(ItemID),0) AS ItemID from MstItem", ReqConn)
        Dim dr As SqlDataReader
        dr = SQLCmd.ExecuteReader
        If dr.HasRows Then
            dr.Read()
            ItemID = dr("ItemID") + 1
        End If
        dr.Close()

    Here I am using ItemID as a unique id in the format 1, 2, 3, ..., finding the maximum and assigning the next value to the new record. But how do I assign the next id if the previous ids are of the form a00001, a00002, a00003, a00004, and so on? How do I produce a unique id in that case?
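
    A hedged sketch of the SQL side (SQL Server): strip the one-character prefix, take the numeric maximum, add one, and pad back to five digits. It assumes every id looks like 'a' followed by exactly five digits, and it has the same race-condition risk between two concurrent callers as the MAX()+1 approach for plain numbers:

        SELECT 'a' + RIGHT('00000' +
               CAST(ISNULL(MAX(CAST(SUBSTRING(ItemID, 2, 5) AS INT)), 0) + 1 AS VARCHAR(5)), 5)
               AS NextItemID
        FROM MstItem;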

    Read the article

  • Group keywords by site

    - by Skudd
    I am finding a lot of useful help here today, and I really appreciate it. This should be the last one for the day: I have a list of the top 10 keywords per site, sorted by visits, by date. The records need to be sorted as follows (excuse the formatting):

                                 2010-05      2010-04
        site1.com
          keyword1               apples       wine
          keyword1 visits        100          12
          keyword2               oranges      water
          keyword2 visits        99           10
        site2.com
          keyword1               blueberry    cornbread
          keyword1 visits        90           100
          keyword2               squares      biscuits
          keyword2 visits        80           99

    Basically what I need to accomplish involves grouping, but I can't seem to figure it out. Am I heading down the right path, or is there another way to achieve this, or is it just impossible?
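
    A hedged sketch against a purely hypothetical table keyword_visits(site, keyword, month, visits); none of these names come from the question. It returns one row per site/keyword/month, which the report layer can then group by site and pivot by month into the layout above (limiting to the top 10 per site would need an extra ranking step):

        SELECT site,
               keyword,
               month,                 -- e.g. '2010-05'
               SUM(visits) AS visits
        FROM keyword_visits
        GROUP BY site, keyword, month
        ORDER BY site, SUM(visits) DESC, keyword, month;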

    Read the article

  • Re-indexing table; update with from

    - by David Thorisson
    The query says it all; I can't find the right syntax without using a for..next loop:

        UPDATE Webtree
        SET Webtree.Sorting = w2.Sorting
        FROM (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree w2
            WHERE w2.ParentID = @ParentID
        )
        WHERE Webtree.BranchID = w2.BranchID
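
    A hedged correction, assuming the intent was to join the target table to the derived result: the derived table itself needs the alias (the alias inside its FROM clause is not visible to the outer statement). SQL Server syntax:

        UPDATE Webtree
        SET Webtree.Sorting = w2.Sorting
        FROM (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree
            WHERE ParentID = @ParentID
        ) AS w2                                  -- alias the derived table here
        WHERE Webtree.BranchID = w2.BranchID;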

    Read the article

  • Why would I do an inner join on a non-distinct field?

    - by froadie
    I just came across a query that does an inner join on a non-distinct field. I've never seen this before and I'm a little confused about this usage. Something like:

        SELECT DISTINCT all, my, stuff
        FROM myTable
        INNER JOIN myOtherTable
            ON myTable.nonDistinctField = myOtherTable.nonDistinctField
        (WHERE some filters here...)

    I'm not quite sure what my question is or how to phrase it, or why exactly this confuses me, but I was wondering if anyone could explain why someone would need to do an inner join on a non-distinct field and then select only distinct values. Is there ever a legitimate use of an inner join on a non-distinct field? What would be the purpose? And if there is a legitimate reason for such a query, can you give examples of where it would be used?
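
    A contrived, hedged illustration with hypothetical tables: joining on a non-unique column fans rows out (many-to-many), and DISTINCT then collapses the duplicates. The join still filters correctly; it just is not one-to-one:

        -- Products that belong to any category with an active promotion.
        -- category_id is non-distinct on both sides, so each match multiplies
        -- rows, and DISTINCT collapses them back to one row per product.
        SELECT DISTINCT p.product_name
        FROM products p
        INNER JOIN promotions pr
                ON pr.category_id = p.category_id
        WHERE pr.active = 1;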

    Read the article

  • Strange: Planner takes decision with lower cost, but (very) query long runtime

    - by S38
    Facts:

        PGSQL 8.4.2, Linux
        I make use of table inheritance
        Each table contains 3 million rows
        Indexes on joining columns are set
        Table statistics (analyze, vacuum analyze) are up to date
        Only used table is "node" with various partitioned sub-tables
        Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid
        FROM rows r

    QUERY PLAN:

        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad; the planner decides to choose hash joins (good) but no indexes (bad).

    Now after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    This is incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than for the first query. So the main question is: why does the planner make the first decision instead of the second?

    Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
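
    A hedged experiment rather than a diagnosis: the plan choice comes entirely from the cost model, so one common first step is to tell the planner that random I/O is relatively cheap (data mostly cached) and compare the resulting EXPLAIN ANALYZE output. The values below are examples only and should be tuned per machine:

        SET random_page_cost = 2.0;           -- default in 8.4 is 4.0
        SET effective_cache_size = '2GB';     -- roughly the OS cache available to Postgres
        -- then re-run: EXPLAIN ANALYZE <the recursive query above>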

    Read the article

  • Chain LINQ IQueryable, and end with Stored Procedure

    - by Alex
    I'm chaining search criteria in my application through IQueryable extension methods, e.g.:

        public static IQueryable<Fish> AtAge(this IQueryable<Fish> fish, Int32 age)
        {
            return fish.Where(f => f.Age == age);
        }

    However, I also have a full text search stored procedure:

        CREATE PROCEDURE [dbo].[Fishes_FullTextSearch]
            @searchtext nvarchar(4000),
            @limitcount int
        AS
        SELECT Fishes.*
        FROM Fishes
        INNER JOIN CONTAINSTABLE(Fishes, *, @searchtext, @limitcount) AS KEY_TBL
            ON Fishes.Id = KEY_TBL.[KEY]
        ORDER BY KEY_TBL.[Rank]

    The stored procedure obviously doesn't return IQueryable; however, is it possible to somehow limit the result set for the stored procedure using IQueryables? I'm envisioning something like .AtAge(5).AboveWeight(100).Fishes_FulltextSearch("abc"). In this case, the full text search should execute on a smaller subset of my Fishes table (narrowed by Age and Weight). Is something like this possible? Sample code?

    Read the article

  • When is referential integrity not appropriate?

    - by Curtis Inderwiesche
    I understand the need to have referential integrity for limiting specific values on entry or possibly preventing them from removal upon a request of deletion. However, I am unclear as to a valid use case which would exclude this mechanism from always being used. I guess this would fall into several sub-questions:

        When is referential integrity not appropriate?
        Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
        Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither, or both?)

    Thoughts?

    Read the article

  • Generic Method to find the tuples used for computation in Postgres?

    - by Rahul
    If I have a table:

        col1 | name             | pay
        -----+------------------+------
           1 | Steve Jobs       | 1006
           2 | Mike Markkula    | 1007
           3 | Mike Scott       | 1978
           4 | John Sculley     | 1983
           5 | Michael Spindler | 1653

    and the user executes a sum query which sums the pay of people getting paid more than $1500, is there a way to also implicitly know which tuples have been used to satisfy the condition for the sum? I know you can separately write another query to just return the primary key ids which satisfy the condition. But is there any other way to do that in the same query? Probably rewrite the query in some way? Or... any suggestion?
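
    A hedged sketch for PostgreSQL 8.4+: return the aggregate and, in the same query, collect the ids of the rows that fed it with array_agg. Column names follow the example above; the table name "employees" is an assumption since the question does not give one:

        SELECT SUM(pay)        AS total_pay,
               array_agg(col1) AS contributing_ids   -- ids of rows included in the sum
        FROM employees
        WHERE pay > 1500;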

    Read the article

  • Should I include user_id in multiple tables?

    - by Drarok
    I'm at the planning stages of a multi-user application where each user will only have access to their own data. There'll be a few tables that relate to each other, so I could use JOINs to ensure they're accessing only their data, but should I include user_id in each table? Would this be faster? It would certainly make some of the queries easier in the long run. Thanks!

    Read the article

  • What resources will help me understand the fundamentals of Relational Database Systems.

    - by Rachel
    These are a few of the fundamental database questions that have always given me trouble. I have tried using Google and Wikipedia, but somehow I miss out on understanding the functionality rather than the terminology. If possible, I would really appreciate it if someone could share more insight on these questions using some visual, representative examples.

        What is a key? A candidate key? A primary key? An alternate key? A foreign key?
        What is an index and how does it help your database?
        What are the data types available and when to use which ones?

    Read the article

  • Using Linq2Sql to insert data into multiple tables using an auto incremented primary key

    - by Thomas
    I have a Customer table with a primary key (int, auto increment) and an Address table with a foreign key to the Customer table. I am trying to insert both rows into the database in one nice transaction.

        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            Customer newCustomer = new Customer()
            {
                Email = customer.Email
            };
            Address b = new Address()
            {
                CustomerID = newCustomer.CustomerID,
                Address1 = billingAddress.Address1
            };
            db.Customers.InsertOnSubmit(newCustomer);
            db.Addresses.InsertOnSubmit(b);
            db.SubmitChanges();
        }

    When I run this I was hoping that the Customer and Address tables would automatically get the correct keys in the database, since the context knows this is an auto incremented key and would do two inserts with the right key in both tables. The only way I can get this to work is to call SubmitChanges() on the Customer object first, then create the Address and call SubmitChanges() on that as well. This creates two round trips to the database, and I would like to see if I can do this in one transaction. Is it possible? Thanks

    Read the article

  • why this left join query failed to load all the data in left table ?

    - by lzyy
    users table

        +-----+-----------+
        | id  | username  |
        +-----+-----------+
        |   1 | tom       |
        |   2 | jelly     |
        |   3 | foo       |
        |   4 | bar       |
        +-----+-----------+

    groups table

        +----+---------+---------+
        | id | user_id | title   |
        +----+---------+---------+
        |  2 |       1 | title 1 |
        |  4 |       1 | title 2 |
        +----+---------+---------+

    the query

        SELECT users.username, users.id, count(groups.title) as group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id

    result

        +----------+----+-------------+
        | username | id | group_count |
        +----------+----+-------------+
        | tom      |  1 |           2 |
        +----------+----+-------------+

    Where is the rest of the users' info? The result is the same as an inner join; shouldn't a left join return all of the left table's data? PS: I'm using MySQL.
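
    A hedged sketch of the usual fix: without a GROUP BY, the aggregate collapses everything into a single row, so MySQL keeps only one username/id. Grouping per user returns all four rows, with 0 for users who have no groups:

        SELECT users.username, users.id, COUNT(groups.title) AS group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id
        GROUP BY users.id, users.username;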

    Read the article

  • MySQL Select problems

    - by John Nuñez
    Table #1: qa_returns_items
    Table #2: qa_returns_residues

    I have spent a long time trying to get this result:

        item_code - item_quantity
        2 - 1
        3 - 2

    The logic I want is:

        IF qa_returns_items.item_code = qa_returns_residues.item_code AND status_code = 11 THEN
            item_quantity = qa_returns_items.item_quantity - qa_returns_residues.item_quantity
        ELSEIF qa_returns_items.item_code = qa_returns_residues.item_code AND status_code = 12 THEN
            item_quantity = qa_returns_items.item_quantity + qa_returns_residues.item_quantity
        ELSE
            show differences
        END IF

    I tried this query:

        select SubQueryAlias.item_code,
               total_ecuation,
               SubQueryAlias.item_unitprice,
               SubQueryAlias.item_unitprice * total_ecuation as item_subtotal,
               item_discount,
               (SubQueryAlias.item_unitprice * total_ecuation) - item_discount as item_total
        from (
            select ri.item_code,
                   case status_code
                       when 11 then ri.item_quantity - rr.item_quantity
                       when 12 then ri.item_quantity + rr.item_quantity
                   end as total_ecuation,
                   rr.item_unitprice,
                   rr.item_quantity,
                   rr.item_discount * rr.item_quantity as item_discount
            from qa_returns_residues rr
            left join qa_returns_items ri on ri.item_code = rr.item_code
            WHERE ri.returnlog_code = 1
        ) as SubQueryAlias
        where total_ecuation > 0
        GROUP BY (item_code);

    The query returns this result:

        item_code - item_quantity
        1 - 2
        2 - 2

    Read the article

  • Query to find all the nodes that are two steps away from a particular node.

    - by iecut
    Suppose I have two columns in a table that represents a graph: the first column is FROMNODE and the second one is TONODE. What I would like to know is how to find all the nodes that are two steps away from a particular node. Let's suppose I have a node numbered '1' and I would like to know all the nodes that are two steps away from it. I have tried (I am assuming the table name is GRAPH):

        SELECT FROMNODE FROM GRAPH WHERE TONODE=1

    This selects all the nodes that are connected to node 1, but I couldn't figure out how to find all the nodes that are two steps away from node 1.
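
    A hedged sketch: self-join the edge table so the first hop ends where the second hop starts. Edge direction here is FROMNODE to TONODE, i.e. nodes reachable from node 1 in exactly two steps; flip the columns if the intended direction is the other way around:

        SELECT DISTINCT g2.TONODE
        FROM GRAPH g1
        JOIN GRAPH g2 ON g2.FROMNODE = g1.TONODE   -- second hop starts at first hop's end
        WHERE g1.FROMNODE = 1;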

    Read the article

  • LINQtoSQL Custom Constructor off Partial Class?

    - by sah302
    Hi all, I read this question here: http://stackoverflow.com/questions/82409/is-there-a-way-to-override-the-empty-constructor-in-a-class-generated-by-linqtosq

    Typically my constructor would look like:

        public User(String username, String password, String email, DateTime birthday, Char gender)
        {
            this.Id = Guid.NewGuid();
            this.DateCreated = this.DateModified = DateTime.Now;
            this.Username = username;
            this.Password = password;
            this.Email = email;
            this.Birthday = birthday;
            this.Gender = gender;
        }

    However, as read in that question, you want to use the partial method OnCreated() to assign values instead of overwriting the default constructor. Okay, so I got this:

        partial void OnCreated()
        {
            this.Id = Guid.NewGuid();
            this.DateCreated = this.DateModified = DateTime.Now;
            this.Username = username;
            this.Password = password;
            this.Email = email;
            this.Birthday = birthday;
            this.Gender = gender;
        }

    However, this gives me two errors:

        Partial Methods must be declared private.
        Partial Methods must have empty method bodies.

    Alright, I change it to Private Sub OnCreated() to remove both of those errors. However, I am still stuck with: how can I pass it values as I would with a normal custom constructor? Also, I am doing this in VB (I converted the code since I know most people know/prefer C#), so would that have an effect on this?

    Read the article

  • I'm trying to handle the updates on 2 related tables in one DetailsView using Jquery and Linq, and h

    - by Ben Reisner
    Given two related tables (reports, report_fields) where there can be many report_fields entries for each reports entry, I need to allow the user to enter new report_fields, delete existing report_fields, and re-arrange the order. Currently I am using a DetailsView to handle the editing of the reports. I added some logic to handle report_fields, and currently it allows you to successfully re-arrange the order, but I'm a little stumped as to the best way to add new items or delete existing items. The basic logic I have is that each report_fields row is represented by an <li>. It has a description as the text, and a field for each field in the report_fields table. I use jQuery Sortable to allow the user to re-arrange the LIs.

    Abbreviated create table statements (foreign key constraint ignored for brevity):

        create table report(
            id integer primary key identity,
            reportname varchar(250)
        )

        create table report_fields(
            id integer primary key identity,
            reportID integer,
            keyname integer,
            keyvalue integer,
            field_order integer
        )

    My abbreviated markup:

        <asp:DetailsView ...>
            ...
            <asp:TemplateField HeaderText="Fields">
                <EditItemTemplate>
                    <ul class="MySortable">
                        <asp:Repeater ID="Repeater1" runat="server" DataSource='<%# Eval("report_fields") %>'>
                            <ItemTemplate>
                                <li>
                                    <%# Eval("keyname") %>: <%# Eval("keyvalue") %>
                                    <input type="hidden" name="keyname[]" value='<%# Eval("keyname") %>' />
                                    <input type="hidden" name="keyvalue[]" value='<%# Eval("keyvalue") %>' />
                                </li>
                            </ItemTemplate>
                        </asp:Repeater>
                    </ul>
                </EditItemTemplate>
            </asp:TemplateField>
        </asp:DetailsView>

        <asp:LinqDataSource ID="LinqDataSource2" onupdating="LinqDataSource2_Updating" table=reports ... />

        $(function() {
            $(".MySortable").sortable({ placeholder: 'MySortable-highlight' }).disableSelection();
        });

    Code-behind class:

        public partial class Administration_AddEditReport
        {
            protected void LinqDataSource2_Updating(object sender, LinqDataSourceUpdateEventArgs e)
            {
                report r = (report)e.NewObject;
                MyDataContext dc = new MyDataContext();
                var fields = from f in dc.report_fields
                             where f.reportID == r.id
                             select f;
                dc.report_fields.DeleteAllOnSubmit(fields);
                NameValueCollection nvc = Request.Params;
                string[] keyname = nvc["keyname[]"].Split(',');
                string[] keyvalue = nvc["keyvalue[]"].Split(',');
                for (int i = 0; i < keyname.Length; i++)
                {
                    report_field rf = new report_field();
                    rf.reportID = r.id;
                    rf.keyname = keyname[i];
                    rf.keyvalue = keyvalue[i];
                    rf.field_order = i;
                    dc.report_fields.InsertOnSubmit(rf);
                }
                dc.SubmitChanges();
            }
        }

    Read the article

  • Obtaining Current Fiscal Year from Hierarchy with MDX

    - by Robert Iver
    I'm building a report in Reporting Services 2005 based on a SSAS 2005 cube. The basic idea of the report is that they want to see sales for this fiscal year to date vs. last year's sales year to date. It sounds simple, but this being my first "real" report based on SSAS, I'm having a hell of a time.

    First, how would one calculate the current fiscal year, quarter, or month? I have a Fiscal Date Hierarchy with all that information in it, but I can't figure out how to say: "Based on today's date, find the current fiscal year, quarter, and month."

    My second, but slightly smaller, problem is getting last year's sales vs. this year's sales. I have seen MANY examples of how to do this, but they all assume that you select the date manually. Since this is a report and will run pretty much on its own, I need a way to insert the "current" fiscal year, quarter, and month into the PERIODSTODATE or PARALLELPERIOD functions to get what I want.

    So, I'm begging for your help on this one. I have to have this report done by tomorrow morning and I'm beginning to freak out a bit. Thanks in advance.

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks,

    I have a SQL Server 2008 database in which I have a table for Tags. A tag is just an id and a name. The definition of the Tag table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        DECLARE @ID int
        SET @ID = (select ID from Tag where Name=@Name)
        if @ID is null
        begin
            INSERT Tag(Name) VALUES (@Name)
            RETURN SCOPE_IDENTITY()
        end
        else
        begin
            return @ID
        end

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error that I'm trying to enter duplicate keys (Name) into the DB. The error is:

        Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'.

    This makes sense; I'm just not sure how to fix it. If it were code I would know how to lock the right areas, but SQL Server is quite a different beast. My first question is: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.
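
    A hedged sketch of the usual check-then-insert pattern: take an update/range lock on the read so two concurrent callers with the same @Name serialize instead of both seeing "not found". This is illustrative of the locking hints, not a drop-in replacement for the proc above (it assumes it runs inside the procedure body, where @Name is already declared):

        BEGIN TRAN;
        DECLARE @ID int;

        -- UPDLOCK + HOLDLOCK keeps the key (or its range, if absent) locked
        -- until commit, so a second caller blocks here rather than inserting.
        SELECT @ID = ID
        FROM dbo.Tag WITH (UPDLOCK, HOLDLOCK)
        WHERE Name = @Name;

        IF @ID IS NULL
        BEGIN
            INSERT dbo.Tag(Name) VALUES (@Name);
            SET @ID = SCOPE_IDENTITY();
        END
        COMMIT TRAN;

        RETURN @ID;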

    Read the article

  • mysql query for getting all messages that belong to user's contacts

    - by aharon
    So I have a database that is set up sort of like this (simplified, and in terms of the tables, all are InnoDB):

        Users: contains basic user authentication information (uid, username, encrypted password, et cetera)
        Contacts: contains two rows per relationship that exists between users, as (uid1, uid2), (uid2, uid1), to allow for a good 1:1 relationship (must be mutual) between users
        Messages: has messages that consist of a blob, owner-id, message-id (auto_increment)

    So my question is: what's the best MySQL query to get all messages that belong to all the contacts of a specific user? Is there an efficient way to do this?
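
    A hedged sketch (MySQL); the exact column names are not shown in the question, so owner_id, uid1 and uid2 here are assumptions based on the description. Since the Contacts table stores both directions of each relationship, a single join suffices:

        SELECT m.*
        FROM Contacts c
        JOIN Messages m ON m.owner_id = c.uid2
        WHERE c.uid1 = ?;   -- ? = the specific user's uid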

    Read the article

  • MySQL slow query

    - by andrhamm
    SELECT items.item_id, items.category_id, items.title, items.description,
           items.quality, items.type, items.status, items.price, items.posted,
           items.modified, zip_code.state_prefix, zip_code.city, books.isbn13,
           books.isbn10, books.authors, books.publisher
    FROM (
        (items LEFT JOIN bookitems ON items.item_id = bookitems.item_id)
        LEFT JOIN books ON books.isbn13 = bookitems.isbn13
    )
    LEFT JOIN zip_code ON zip_code.zip_code = items.item_zip
    WHERE items.rid = $rid

    I am running this query to get the list of a user's items and their location. The zip_code table has over 40k records and this might be the issue. It currently takes up to 15 seconds to return a list of about 20 items! What can I do to make this query more efficient?
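
    A hedged first step rather than a guaranteed fix: make sure every join and filter column is indexed, then compare the plan with EXPLAIN. The index names below are made up; skip any that already exist:

        CREATE INDEX idx_items_rid          ON items (rid);
        CREATE INDEX idx_bookitems_item_id  ON bookitems (item_id);
        CREATE INDEX idx_books_isbn13       ON books (isbn13);
        CREATE INDEX idx_zip_code_zip_code  ON zip_code (zip_code);
        -- then: EXPLAIN <the query above> to confirm the indexes are being used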

    Read the article

  • Calling sp and Performance strategy.

    - by Costa
    Hi, I find myself in a situation where I have to choose between two approaches. Either I create a new stored procedure in the database and write the middle-layer code for it, which loses some precious development time (the procedure is also likely to contain some joins), or I use two existing stored procedures. The problem with the second approach is that I am making two round trips to the database, which can mean poor performance, especially if the database is on another server. Which approach would you go with, and why? Thanks

    Read the article
