Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • switch case in where clause

    - by Nimesh
    Hi, I need to check three conditions: if @filter = 1, run **select * from employeestable where rating is not null**; else if @filter = 2, run **select * from employeestable where rating is null**; else run **select * from employeestable**. I need to do this inside a single query, ideally with a CASE expression. The query is currently more than 30 lines long, and collapsing the branches would cut the code by up to 70%. Please let me know how I can do this.
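
    One common way to collapse the three branches is a single WHERE clause (a sketch only; the table and column names are taken from the question, and wrapping the same conditions in a CASE expression compared to 1 gives an equivalent query if a CASE is required):

    ```sql
    -- Sketch: @filter chooses the branch, mirroring the question's IF/ELSE logic.
    SELECT *
    FROM   EmployeesTable
    WHERE  (@filter = 1 AND Rating IS NOT NULL)
        OR (@filter = 2 AND Rating IS NULL)
        OR (@filter NOT IN (1, 2));   -- the "else" branch: no filter on Rating
    ```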

    Read the article

  • add ANOTHER primary key to a table which is UNIQUE

    - by gdubs
    So I'm having problems with adding another "primary key" to my table. I have 3 columns: 1. AccountID (identity), 2. EmailID, 3. a data field. When I made the table I used PRIMARY KEY (AccountID, EmailID) to make AccountID and EmailID unique. I thought that would make EmailID unique, but when I tried inserting another row with the same EmailID it went through, so I must have missed something. Now for my questions: if I have to use ALTER, how do I alter the table or the PK constraint so that EmailID is unique on its own? And if I decided to drop the table and make a new one, how do I make those two columns unique? Thanks a bunch!
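
    A composite primary key only guarantees that the (AccountID, EmailID) pair is unique, not each column by itself. A minimal sketch of both options (the table name Accounts and the column types are assumptions, not from the question):

    ```sql
    -- Option 1: keep the existing table and add a separate UNIQUE constraint on EmailID.
    ALTER TABLE Accounts
    ADD CONSTRAINT UQ_Accounts_EmailID UNIQUE (EmailID);

    -- Option 2: recreate the table with AccountID as the primary key
    -- and EmailID declared UNIQUE in its own right.
    CREATE TABLE Accounts
    (
        AccountID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        EmailID   NVARCHAR(256)     NOT NULL UNIQUE,
        Data      NVARCHAR(MAX)     NULL
    );
    ```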

    Read the article

  • SQL - Finding continuous entries of a given size.

    - by ByteMR
    I am working on a system for reserving seats. A user inputs how many seats they wish to reserve, and the database should return suggested groups of seats that are not already reserved and that match the requested number. For instance, if I had the table:

    SeatID | Reserved
    -------+---------
    1      | false
    2      | true
    3      | false
    4      | false
    5      | false
    6      | true
    7      | true
    8      | false
    9      | false
    10     | true

    and the user wants to reserve 2 seats, I would expect the query to return that (3, 4), (4, 5), and (8, 9) are unreserved and match the requested number of seats. Seats are organized into sections and rows, and continuous seats must be in the same row. How would I structure this query so that it finds all available continuous seats matching the given input?
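
    For the 2-seat case shown, a self-join is enough. This is a sketch only: the table name Seats and the BIT column are assumptions, consecutive SeatIDs stand in for "adjacent in the same row", and a real query would also join on section and row. Arbitrary group sizes generalise to a gaps-and-islands query over the unreserved seats.

    ```sql
    -- Sketch: find every pair of adjacent unreserved seats (matches (3,4), (4,5), (8,9)).
    -- Assumes a table Seats(SeatID INT, Reserved BIT).
    SELECT s1.SeatID AS FirstSeat,
           s2.SeatID AS SecondSeat
    FROM   Seats s1
    JOIN   Seats s2 ON s2.SeatID = s1.SeatID + 1
    WHERE  s1.Reserved = 0
      AND  s2.Reserved = 0;
    ```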

    Read the article

  • SQL - How to join on similar (not exact) columns

    - by BlueRaja
    I have two tables which get updated at almost exactly the same time, and I need to join them on the datetime column. I've tried this: SELECT * FROM A, B WHERE ABS(DATEDIFF(second, A.Date_Time, B.Date_Time)) = (SELECT MIN(ABS(DATEDIFF(second, A.Date_Time, B2.Date_Time))) FROM B AS B2) But it tells me: "Multiple columns are specified in an aggregated expression containing an outer reference. If an expression being aggregated contains an outer reference, then that outer reference must be the only column referenced in the expression." How can I join these tables?
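
    One way around the error (a sketch, assuming SQL Server; the table and column names follow the question) is to let each row of A pick its single closest row in B with CROSS APPLY instead of a correlated aggregate:

    ```sql
    -- Sketch: for every row in A, keep only the B row whose Date_Time is nearest.
    SELECT a.*, nearest.*
    FROM   A AS a
    CROSS APPLY
    (
        SELECT TOP (1) b.*
        FROM   B AS b
        ORDER BY ABS(DATEDIFF(SECOND, a.Date_Time, b.Date_Time))
    ) AS nearest;
    ```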

    Read the article

  • SQL - How to select a row having a column with max value

    - by Abhi
    date             value
    18/5/2010, 1 pm  40
    18/5/2010, 2 pm  20
    18/5/2010, 3 pm  60
    18/5/2010, 4 pm  30
    18/5/2010, 5 pm  60
    18/5/2010, 6 pm  25

    I need to query for the row having MAX(value), i.e. 60. Here that gives two rows, so from those I need the one with the lowest timestamp for that day, i.e. 18/5/2010, 3 pm with value 60.
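
    A minimal sketch (SQL Server syntax; the names Readings, ReadingDate and Value are placeholders, since the question does not give the table or column names):

    ```sql
    -- Sketch: highest value first, and among ties the earliest timestamp wins.
    SELECT TOP (1) ReadingDate, Value
    FROM   Readings
    ORDER BY Value DESC, ReadingDate ASC;
    ```

    In databases without TOP, LIMIT 1 or FETCH FIRST 1 ROW ONLY achieves the same thing.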

    Read the article

  • Linq to sql C# updating reference Tables

    - by Laurence Burke
    OK, to clarify: I am adding a new address, and I know the structure; AddressID is the primary key and all the other columns are non-nullable. On insert of a new row the AddressID PK is auto-generated, and I am wondering whether I have to fetch that value in order to create a new row in the referencing table, or whether that row gets generated automatically as well. I also want to repopulate the dropdown list of the current employee's addresses with the newly created address.

        static uint _curEmpID;

        protected void btnAdd_Click(object sender, EventArgs e)
        {
            if (txtZip.Text != "" && txtAdd1.Text != "" && txtCity.Text != "")
            {
                TestDataClassDataContext dc = new TestDataClassDataContext();
                Address addr = new Address()
                {
                    AddressLine1 = txtAdd1.Text,
                    AddressLine2 = txtAdd2.Text,
                    City = txtCity.Text,
                    PostalCode = txtZip.Text,
                    StateProvinceID = Convert.ToInt32(ddlState.SelectedValue)
                };
                dc.Addresses.InsertOnSubmit(addr);
                lblSuccess.Visible = true;
                lblErrMsg.Visible = false;
                dc.SubmitChanges();
                //
                // TODO: add reference from new address to CurEmp Table
                //
                SetAddrList();
            }
            else
            {
                lblErrMsg.Text = "Invalid Input";
                lblErrMsg.Visible = true;
            }
        }

        protected void ddlAddList_SelectedIndexChanged(object sender, EventArgs e)
        {
            lblErrMsg.Visible = false;
            lblSuccess.Visible = false;
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            if (ddlAddList.SelectedValue != "-1")
            {
                var addr = (from a in dc.Addresses
                            where a.AddressID == Convert.ToInt32(ddlAddList.SelectedValue)
                            select a).FirstOrDefault();
                txtAdd1.Text = addr.AddressLine1;
                txtAdd2.Text = addr.AddressLine2;
                txtCity.Text = addr.City;
                txtZip.Text = addr.PostalCode;
                ddlState.SelectedValue = addr.StateProvinceID.ToString();
                btnSubmit.Visible = true;
                btnAdd.Visible = false;
            }
            else
            {
                txtAdd1.Text = "";
                txtAdd2.Text = "";
                txtCity.Text = "";
                txtZip.Text = "";
                btnAdd.Visible = true;
                btnSubmit.Visible = false;
            }
        }

        protected void SetAddrList()
        {
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            var addList = from addr in dc.Addresses
                          from eaddr in dc.EmployeeAddresses
                          where eaddr.EmployeeID == _curEmpID && addr.AddressID == eaddr.AddressID
                          select new
                          {
                              AddValue = addr.AddressID,
                              AddText = addr.AddressID,
                          };
            ddlAddList.DataSource = addList;
            ddlAddList.DataValueField = "AddValue";
            ddlAddList.DataTextField = "AddText";
            ddlAddList.DataBind();
            ddlAddList.Items.Add(new ListItem("<Add Address>", "-1"));
        }

    I hope I have not included too much code. I would also really appreciate any other comments on how I could improve this code in other ways.
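
    LINQ to SQL only creates a link row if the association is populated on the entities before SubmitChanges; it is not generated automatically from the Address insert alone. In plain SQL terms, the TODO step corresponds to something like the sketch below (EmployeeAddresses, EmployeeID and AddressID are taken from the code above; the Address table name and the parameter names are assumptions):

    ```sql
    -- Sketch of the two inserts the page performs conceptually: the address row first,
    -- then an explicit link row using the identity value that was just generated.
    INSERT INTO Address (AddressLine1, AddressLine2, City, PostalCode, StateProvinceID)
    VALUES (@AddressLine1, @AddressLine2, @City, @PostalCode, @StateProvinceID);

    INSERT INTO EmployeeAddresses (EmployeeID, AddressID)
    VALUES (@CurEmpID, SCOPE_IDENTITY());
    ```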

    Read the article

  • PL/SQL Sum by hour

    - by Steve
    Hi, I have some data with start and stop times that I need to sum by hour, and I am not sure how to write the query. Here is the data I have to work with:

    STARTTIME,              STOPTIME,               EVENTCAPACITY
    8/12/2009 1:15:00 PM,   8/12/2009 1:59:59 PM,   100
    8/12/2009 2:00:00 PM,   8/12/2009 2:29:59 PM,   100
    8/12/2009 2:30:00 PM,   8/12/2009 2:59:59 PM,   80
    8/12/2009 3:00:00 PM,   8/12/2009 3:59:59 PM,   85

    In this example I would need the sums for 1 pm to 2 pm, 2 pm to 3 pm and 3 pm to 4 pm. Any suggestions are appreciated. Steve
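
    Since every interval in the sample falls inside a single clock hour, grouping by the truncated start time is enough. A sketch in Oracle SQL (the PL/SQL tag suggests Oracle; the table name events is an assumption):

    ```sql
    -- Sketch: bucket each row by the hour its interval starts in and sum the capacity.
    SELECT TRUNC(starttime, 'HH24') AS hour_start,
           SUM(eventcapacity)       AS total_capacity
    FROM   events
    GROUP BY TRUNC(starttime, 'HH24')
    ORDER BY hour_start;
    ```

    Intervals that cross an hour boundary would need to be split (or joined to an hours calendar) before summing.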

    Read the article

  • sql query question

    - by bu0489
    Hey guys, I'm having a bit of difficulty with a query: I'm trying to work out how to show the most popular (most visited) naturopath in a centre. My tables look as follows: Patient(patientId, name, gender, DoB, address, state, postcode, homePhone, businessPhone, maritalStatus, occupation, duration, unit, race, registrationDate, GPNo, NaturopathNo) and Naturopath(NaturopathNo, name, contactNo, officeStartTime, officeEndTime, emailAddress). To query this I've come up with: SELECT count(*), naturopathno FROM dbf10.patient WHERE naturopathno != 'NULL' GROUP BY naturopathno; which results in:

    COUNT(*)  NATUROPATH
    2         NP5
    1         NP6
    3         NP2
    1         NP1
    2         NP3
    1         NP7
    2         NP8

    My question is: how would I go about selecting the highest count from this list and printing that value along with the naturopath's name? Any suggestions are very welcome, Brad
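
    One way to finish it off (a sketch that reuses the question's table and column names without the schema prefix, filters on naturopathno IS NOT NULL rather than comparing to the string 'NULL', and keeps every naturopath that ties for the highest count):

    ```sql
    -- Sketch: compute visits per naturopath, join to get the name,
    -- and keep only the row(s) whose count equals the overall maximum.
    SELECT n.name, v.visits
    FROM   naturopath n
    JOIN  (SELECT naturopathno, COUNT(*) AS visits
           FROM   patient
           WHERE  naturopathno IS NOT NULL
           GROUP BY naturopathno) v
          ON v.naturopathno = n.naturopathno
    WHERE  v.visits = (SELECT MAX(visits)
                       FROM (SELECT COUNT(*) AS visits
                             FROM   patient
                             WHERE  naturopathno IS NOT NULL
                             GROUP BY naturopathno) t);
    ```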

    Read the article

  • Sql query - selecting top 5 rows and further selecting rows only if User is present

    - by Gublooo
    Hello, I'm kind of stuck on how to implement this query. It is pretty similar to the query I posted earlier, but I'm not able to crack it. I have a shopping table where a record is inserted every time a user buys anything. Some of the fields are:

    * shopping_id (primary key)
    * store_id
    * user_id

    Now I need to pull only the list of those stores where the user is among the top 5 visitors. Breaking it down, this is what I want to accomplish:

    * Find all stores this UserA has visited.
    * For each of these stores, see who the top 5 visitors are.
    * Select the store only if UserA is among those top 5 visitors.

    The corresponding queries would be: select store_id from shopping where user_id = xxx and select user_id, count(*) as 'visits' from shopping where store_id in (select store_id from shopping where user_id = xxx) group by user_id order by visits desc limit 5. Now I need to check whether UserA is present in this result set and select the store only if he is present. For example, if he has visited a store 5 times, but there are 5 or more people who have visited that store more than 5 times, then that store should not be selected. I'm kind of lost here. Thanks for your help.
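
    A sketch of one way to do it with window functions (available in MySQL 8+, SQL Server and PostgreSQL; @user_id is a placeholder for UserA's id, and the table and column names follow the question). It ranks each store's visitors by visit count and keeps the stores where the user lands in the top 5:

    ```sql
    -- Sketch: one row per (store, user) with its visit count; RANK() orders users
    -- within each store by that count, then the outer query keeps the stores
    -- where the given user ranks 5th or better.
    SELECT store_id
    FROM (
        SELECT store_id,
               user_id,
               RANK() OVER (PARTITION BY store_id
                            ORDER BY COUNT(*) DESC) AS visit_rank
        FROM   shopping
        GROUP BY store_id, user_id
    ) ranked
    WHERE user_id = @user_id
      AND visit_rank <= 5;
    ```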

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data
        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )
        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common.

    I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs.

    If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    * More than one row, even though I asked for TOP 1?
    * No rows, even though one exists and has been committed?
    * Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

    "Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether!"

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous.

    Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
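
    For reference, enabling row versioning in SQL Server is a database-level setting (the database name below is a placeholder). READ_COMMITTED_SNAPSHOT changes the behaviour of ordinary READ COMMITTED reads, while ALLOW_SNAPSHOT_ISOLATION enables the explicit SNAPSHOT level discussed in the update above:

    ```sql
    -- Sketch: turning on READ_COMMITTED_SNAPSHOT needs exclusive access to the database
    -- (or WITH ROLLBACK IMMEDIATE) to take effect.
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
    ```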

    Read the article

  • What to do with syncobj in SQL Server

    - by hgulyan
    Hi. I run this script to search for a particular piece of text in sys.columns, and I get a lot of results like "dbo.syncobj_0x3934443438443332": SELECT c.name, s.name + '.' + o.name FROM sys.columns c INNER JOIN sys.objects o ON c.object_id = o.object_id INNER JOIN sys.schemas s ON o.schema_id = s.schema_id WHERE c.name LIKE '%text%' If I understand correctly, these are replication objects. Is that so? Can I just exclude them from my query with something like o.name NOT LIKE '%syncobj%', or is there a better way? Thank you.

    Read the article

  • Why better isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring the performance of my query I found a dependency between isolation level and elapsed time that surprised me:

    READUNCOMMITTED - 409024
    READCOMMITTED   - 368021
    REPEATABLEREAD  - 358019
    SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a better (stricter) isolation level give better performance? This is a development machine and no concurrency whatsoever is happening. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I measured this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening. Update 2: the query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if it gives a performance benefit.

    Read the article

  • SQL Query Returning Duplicate Results

    - by Jesse Bunch
    Hi, I've been working on this query for a while now and I thought I had it where I wanted it, but apparently not. There are two records in the database (orders). The query should return two different rows, but instead it returns two rows that have exactly the same values. I think it may be something to do with the GROUP BY or the derived tables I'm using, but my eyes are tired and not seeing the problem. Can any of you help? Thanks in advance.

        SELECT orders.billerID, orders.invoiceDate, orders.txnID, orders.bName, orders.bStreet1,
               orders.bStreet2, orders.bCity, orders.bState, orders.bZip, orders.bCountry,
               orders.sName, orders.sStreet1, orders.sStreet2, orders.sCity, orders.sState,
               orders.sZip, orders.sCountry, orders.paymentType, orders.invoiceNotes, orders.pFee,
               orders.shipping, orders.tax, orders.reasonCode, orders.txnType, orders.customerID,
               customers.firstName AS firstName,
               customers.lastName AS lastName,
               customers.businessName AS businessName,
               orderStatus.statusName AS orderStatus,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax AS orderTotal,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax
                   - IFNULL(payments.totalPayments, 0.00) AS orderBalance
        FROM orders
        LEFT JOIN customers ON orders.customerID = customers.id
        LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id
        LEFT JOIN (
            SELECT orderItems.orderID,
                   SUM(orderItems.itemPrice * orderItems.itemQuantity) AS itemTotal
            FROM orderItems
            GROUP BY orderItems.orderID
        ) orderItems ON orderItems.orderID = orders.id
        LEFT JOIN (
            SELECT payments.orderID,
                   SUM(payments.amount) AS totalPayments
            FROM payments
            GROUP BY payments.orderID
        ) payments ON payments.orderID = orders.id

    Read the article

  • Will SQL Server Partitioning increase performance without changing filegroups

    - by Tom
    Scenario: I have a 10 million row table. I partition it into 10 partitions, which results in 1 million rows per partition, but I do not do anything else (like moving the partitions to different filegroups or spindles). Will I see a performance increase? Is this in effect like creating 10 smaller tables? If I have queries that perform key lookups or scans, will performance increase as if they were operating against a much smaller table? I'm trying to understand how partitioning is different from just having a well-indexed table, and where it can be used to improve performance. Would a better scenario be to move the old data (using partition switching) out of the primary table into a read-only archive table? Is having a table with a 1 million row partition and a 9 million row partition analogous (performance-wise) to moving the 9 million rows to another table and leaving only 1 million rows in the original table?
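
    For concreteness, a sketch of the setup being described, with hypothetical names and every partition kept on the same filegroup, which is exactly the "no filegroup changes" case the question asks about:

    ```sql
    -- Sketch: range partitioning on an INT key, all partitions on [PRIMARY].
    -- Whether a query benefits depends mainly on partition elimination, i.e. whether
    -- its predicates include the partitioning column (PartKey here).
    CREATE PARTITION FUNCTION pf_TenWay (INT)
        AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000, 4000000,
                                   5000000, 6000000, 7000000, 8000000, 9000000);

    CREATE PARTITION SCHEME ps_TenWay
        AS PARTITION pf_TenWay ALL TO ([PRIMARY]);

    CREATE TABLE dbo.BigTable
    (
        PartKey INT           NOT NULL,
        Id      BIGINT        NOT NULL,
        Payload NVARCHAR(100) NULL,
        CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (PartKey, Id)
    ) ON ps_TenWay (PartKey);
    ```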

    Read the article

  • How to split a single column's values into multiple columns?

    - by Shahsra
    Hi all, I have a problem splitting single-column values into multiple columns. For example, given:

    **Name**
    abcd efgh
    ijk lmn opq
    asd j. asdjja
    asb (asdfas) asd
    asd

    I need output something like this:

    first_name   last_name
    abcd         efgh
    ijk          opq
    asd          asdjja
    asb          asd
    asd          null

    The middle name can be omitted (no need for a middle name). The destination columns are already created, and I need to insert the data from that single 'Name' column. Thanks a lot, Shahsra
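
    A sketch of the split (SQL Server string functions; SourceTable is a placeholder name). It takes the first word as first_name and the last word as last_name, leaving last_name NULL when there is only one word, which matches the sample output:

    ```sql
    -- Sketch: CHARINDEX finds the first space; REVERSE + CHARINDEX finds the last one.
    SELECT
        CASE WHEN CHARINDEX(' ', Name) > 0
             THEN LEFT(Name, CHARINDEX(' ', Name) - 1)
             ELSE Name
        END AS first_name,
        CASE WHEN CHARINDEX(' ', Name) > 0
             THEN RIGHT(Name, CHARINDEX(' ', REVERSE(Name)) - 1)
             ELSE NULL
        END AS last_name
    FROM SourceTable;
    ```

    Wrapping the SELECT in an INSERT INTO target_table (first_name, last_name) would complete the load into the existing columns.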

    Read the article

  • LINQ to SQL repository - caching data

    - by creativeincode
    I have built my first MVC solution and used the repository pattern for retrieving/inserting/updating my database. I am now refactoring and have noticed that all of the methods in my repository hit the database every time. This seems like overkill; what I'd ideally like to do is 'cache' the main data object (e.g. the result of 'GetAllAdverts') and then query against that cached object for things like 'FindAdvert(id)', 'AddAdvert()', 'DeleteAdvert()' etc. I'd also need to keep the cached object and the database in sync when records are added, updated or deleted. What is the best approach for something like this? My knowledge of this kind of thing is minimal, and I'm really looking for advice/guidance/a tutorial to point me in the right direction. Thanks in advance.

    Read the article

  • Problem with Full text Searching

    - by devendra
    I am searching resumes to check whether a word exists or not, using the queries below. Case 1: select top(10) c_resume_text from sntbl_candidates where contains(c_resume_text,'"a/dm"') In this case it does not work properly: it shows resumes even though no such text exists, and in the Messages tab I get the following message: Informational: The full-text search condition contained noise word(s). If I try Case 2: select top(10) c_resume_text from sntbl_candidates where contains(c_resume_text,'"a/d') I get proper results. Can anyone suggest what to do? Thanks

    Read the article

  • More FP-correct way to create an update sql query

    - by James Black
    I am working on accessing a database using F#, and my initial attempt at a function that builds the update query is flawed:

        let BuildUserUpdateQuery (oldUser:UserType) (newUser:UserType) =
            let buf = new System.Text.StringBuilder("UPDATE users SET ");
            if (oldUser.FirstName.Equals(newUser.FirstName) = false) then
                buf.Append("SET first_name='").Append(newUser.FirstName).Append("'" ) |> ignore
            if (oldUser.LastName.Equals(newUser.LastName) = false) then
                buf.Append("SET last_name='").Append(newUser.LastName).Append("'" ) |> ignore
            if (oldUser.UserName.Equals(newUser.UserName) = false) then
                buf.Append("SET username='").Append(newUser.UserName).Append("'" ) |> ignore
            buf.Append(" WHERE id=").Append(newUser.Id).ToString()

    This doesn't properly put a comma between the update parts after the first one; the desired output is, for example: UPDATE users SET first_name='Firstname', last_name='lastname' WHERE id=... I could put in a mutable variable to keep track of whether the first part of the SET clause has been appended, but that seems wrong. I could also create a list of tuples, where each tuple is (old text, new text, column name), so that I could loop through the list and build up the query, but it seems that I should be passing a StringBuilder to a recursive function, returning a boolean which is then passed back as a parameter to the recursive call. Does this seem to be the best approach, or is there a better one?

    Read the article
