Search Results

Search found 27497 results on 1100 pages for 'sql joke'.


  • Cannot resolve collation conflict in Union select

    - by phenevo
    Hi, I've got two queries. The first doesn't work:

        select hotels.TargetCode as TargetCode from hotels
        union all
        select DuplicatedObjects.duplicatetargetCode as TargetCode from DuplicatedObjects
        where DuplicatedObjects.objectType=4

    because I get the error: "Cannot resolve collation conflict for column 1 in SELECT statement." The second works:

        select hotels.Code from hotels where hotels.targetcode is not null
        union all
        select DuplicatedObjects.duplicatetargetCode as Code from DuplicatedObjects
        where DuplicatedObjects.objectType=4

    Structure: Hotels.Code - PK, nvarchar(40); Hotels.TargetCode - nvarchar(100); DuplicatedObjects.duplicatetargetCode - PK, nvarchar(100)
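
    One way this kind of conflict is commonly resolved (a sketch, assuming SQL Server and that forcing both sides of the UNION to the database's default collation is acceptable; table and column names are taken from the question):

        select hotels.TargetCode collate database_default as TargetCode
        from hotels
        union all
        select DuplicatedObjects.duplicatetargetCode collate database_default as TargetCode
        from DuplicatedObjects
        where DuplicatedObjects.objectType = 4

    An explicit collation on one side would work the same way; the point is that both branches of the UNION must agree.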

    Read the article

  • MongoDB equivalent of SQL "OR"

    - by Matt
    So, MongoDB defaults to "AND" when finding records. For example: db.users.find({age: {'$gte': 30, '$lte': 40}}); The above query finds users >= 30 AND <= 40 years old. How would I find users <= 30 OR >= 40 years old?

    Read the article

  • Test the sequentiality of a column with a single SQL query

    - by LauriE
    Hey, I have a table that contains sets of sequential datasets, like this:

        ID  set_ID   some_column   n
        1   'set-1'  'aaaaaaaaaa'  1
        2   'set-1'  'bbbbbbbbbb'  2
        3   'set-1'  'cccccccccc'  3
        4   'set-2'  'dddddddddd'  1
        5   'set-2'  'eeeeeeeeee'  2
        6   'set-3'  'ffffffffff'  2
        7   'set-3'  'gggggggggg'  1

    At the end of a transaction that makes several types of modifications to those rows, I would like to ensure that within a single set, all the values of "n" are still sequential (rollback otherwise). They do not need to be in the same order as the PK, just sequential, like 1-2-3 or 3-1-2, but not like 1-3-4. Because there might be thousands of rows within a single set, I would prefer to do the check in the db to avoid the overhead of fetching the data just for verification after making some small changes. There is also the issue of concurrency. The way locking in InnoDB (repeatable read) works (as I understand it) is that if I have an index on "n" then InnoDB also locks the "gaps" between values. If I combine set_ID and n into a single index, would that eliminate the problem of phantom rows appearing? Looks to me like a common problem. Any brilliant ideas? Thanks! Note: using MySQL + InnoDB
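
    A minimal sketch of a per-set check (assuming MySQL, a hypothetical table name datasets, and that n is expected to start at 1 in every set): a set is still sequential when it has no duplicate values of n, its minimum is 1, and its maximum equals its row count, so any row returned here signals a broken set and the transaction can be rolled back.

        SELECT set_ID
        FROM datasets
        GROUP BY set_ID
        HAVING MIN(n) <> 1
            OR MAX(n) <> COUNT(*)
            OR COUNT(DISTINCT n) <> COUNT(*);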

    Read the article

  • Copy Rows in a One to Many with LINQ to SQL

    - by Refracted Paladin
    I have a table that stores a bunch of diagnoses for a single plan. When the users create a new plan I need to copy over all existing diagnoses as well. I had thought to try the below, but this is obviously not correct. I am guessing that I will need to loop through my oldDiagnosis part, but how? Thanks! My attempt so far:

        public static void CopyPlanDiagnosis(int newPlanID, int oldPlanID)
        {
            using (var context = McpDataContext.Create())
            {
                var oldDiagnosis = from planDiagnosi in context.tblPlanDiagnosis
                                   where planDiagnosi.PlanID == oldPlanID
                                   select planDiagnosi;

                var newDiagnosis = new tblPlanDiagnosi
                {
                    PlanID = newPlanID,
                    DiagnosisCueID = oldDiagnosis.DiagnosisCueID,
                    DiagnosisOther = oldDiagnosis.DiagnosisOther,
                    AdditionalInfo = oldDiagnosis.AdditionalInfo,
                    rowguid = Guid.NewGuid()
                };

                context.tblPlanDiagnosis.InsertOnSubmit(newDiagnosis);
                context.SubmitChanges();
            }
        }
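
    If dropping down to set-based SQL is acceptable instead of a LINQ loop, the same copy can be expressed as a single INSERT ... SELECT (a sketch, assuming the column names from the attempt above and that the table's identity/primary key is generated by the database):

        INSERT INTO tblPlanDiagnosis (PlanID, DiagnosisCueID, DiagnosisOther, AdditionalInfo, rowguid)
        SELECT @newPlanID, DiagnosisCueID, DiagnosisOther, AdditionalInfo, NEWID()
        FROM tblPlanDiagnosis
        WHERE PlanID = @oldPlanID;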

    Read the article

  • How can I substitute a 0 for a NULL value in an SQL query result

    - by Name.IsNullOrEmpty
        SELECT EmployeeMaster.EmpNo, SUM(LeaveApplications.LeaveDaysTaken) AS LeaveDays
        FROM EmployeeMaster
        FULL OUTER JOIN LeaveApplications ON EmployeeMaster.id = LeaveApplications.EmployeeRecordID
        INNER JOIN LeaveMaster ON EmployeeMaster.id = LeaveMaster.EmpRecordID
        GROUP BY EmployeeMaster.EmpNo
        ORDER BY LeaveDays DESC

    With the above query, if an employee has no leave application record in the LeaveApplications table, their SUM(LeaveApplications.LeaveDaysTaken) AS LeaveDays column returns NULL. What I would like to do is return a value of 0 (zero) instead of NULL. I want to do this because I have a calculated column in the same query whose formula depends on the LeaveDays returned, and when LeaveDays is NULL the formula somehow fails. Is there a way I can put 0 in place of NULL so that I can get my desired result?
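
    A common fix (a sketch, assuming SQL Server; COALESCE works the same way on other engines) is to wrap the aggregate so that a missing application record yields 0 instead of NULL:

        SELECT EmployeeMaster.EmpNo,
               ISNULL(SUM(LeaveApplications.LeaveDaysTaken), 0) AS LeaveDays
        FROM EmployeeMaster
        FULL OUTER JOIN LeaveApplications ON EmployeeMaster.id = LeaveApplications.EmployeeRecordID
        INNER JOIN LeaveMaster ON EmployeeMaster.id = LeaveMaster.EmpRecordID
        GROUP BY EmployeeMaster.EmpNo
        ORDER BY LeaveDays DESC

    Any dependent calculated column can then reference LeaveDays without special-casing NULL.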

    Read the article

  • Looping over some selected values in a stored procedure

    - by macca1
    I'm trying to modify a stored procedure hooked into an ORM tool. I want to add a few more rows based on a loop over the distinct values in a column. Here's the current SP:

        SELECT GRP = STAT_CD, CODE = REASN_CD
        FROM dbo.STATUS_TABLE WITH (NOLOCK)
        ORDER BY STAT_CD, SRT_ORDR

    For each distinct STAT_CD, I'd also like to insert a REASN_CD of "--" here in the SP. However, I'd like to do it before the ORDER BY so I can give those rows negative sort orders and they come in at the top of the list. I'm getting tripped up on how to implement this. Does anyone know how to do this for each unique STAT_CD?
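
    One way to get the "--" row at the top of each group without an explicit loop (a sketch, assuming SRT_ORDR is numeric so the injected rows can take a negative sort value):

        SELECT GRP, CODE
        FROM (
            SELECT GRP = STAT_CD, CODE = REASN_CD, SRT = SRT_ORDR
            FROM dbo.STATUS_TABLE WITH (NOLOCK)
            UNION ALL
            SELECT DISTINCT STAT_CD, '--', -1
            FROM dbo.STATUS_TABLE WITH (NOLOCK)
        ) x
        ORDER BY GRP, SRT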

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically need to get the data from a multi-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    db1 = linked server, db2 = my own database. After this one, I am using an INSERT INTO + SELECT to add this data to my table, which is located in db2 (usually a few hundred records, with this import running once a minute). My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use in order to make this run faster? Thanks
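
    One common way to push the heavy join to the remote side is OPENQUERY, which executes the quoted statement on the linked server and ships back only its result set. A sketch, with everything hypothetical except the OPENQUERY pattern itself (the linked server name DB1SRV, the column names, and the join keys are placeholders, since the real ones are elided in the question):

        INSERT INTO db2.dbo.import_table (id, payload)
        SELECT r.id, r.payload
        FROM OPENQUERY(DB1SRV,
                'SELECT a.id, a.payload
                 FROM dbo.tbl1 a
                 INNER JOIN dbo.tbl2 b ON b.tbl1_id = a.id
                 INNER JOIN dbo.tbl3 c ON c.tbl1_id = a.id
                 INNER JOIN dbo.tbl4 d ON d.tbl1_id = a.id') AS r
        INNER JOIN db2.dbo.myside m ON m.remote_id = r.id;

    Whether this beats the four-part-name version depends on how much the remote join reduces the row count before the data crosses the link.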

    Read the article

  • Eliminate duplicates in SQL query

    - by ewdef
    I have a table with 6 fields. The columns are ID, New_ID, Price, Title, Img, Active. I have data where the same New_ID appears more than once with different prices. When I do a select I want to show only distinct rows, one per New_ID, e.g.:

        ID  New_ID  Price  Title   Img     Active
        1   1       20.00  PA-1    0X4...  1
        2   1       10.00  PA-10   0X4...  1
        3   3       20.00  PA-11   0X4...  1
        4   4       30.00  PA-5    0X4...  1
        5   9       20.00  PA-99A  0X4...  1
        6   3       50.00  PA-55   0X4...  1

    When the select statement runs, only the rows with ID 1, 4, 5, and 6 should show, because for each New_ID the row with the higher price should show up. How can I do this?
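
    A sketch of one way to keep only the highest-priced row per New_ID (assuming SQL Server 2005 or later and a hypothetical table name products; a tie on Price would need an extra tie-breaker in the ORDER BY):

        SELECT ID, New_ID, Price, Title, Img, Active
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY New_ID ORDER BY Price DESC) AS rn
            FROM products
        ) t
        WHERE rn = 1;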

    Read the article

  • SQL Server partitioning

    - by durilai
    I have a table that has millions of records, and we are looking at implementing table partitioning. We have a foreign key, "GroupID", that we would like to partition on. Is this possible? The group will have more entries added to it over time, so as new GroupIDs are added, can the partitions be created dynamically?
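
    Partitioning on an int foreign key works the same as on any other column; what is not automatic is adding partitions for new values. A sketch of the pieces involved (assuming SQL Server table partitioning, one partition per GroupID, and a single filegroup; the function and scheme names are placeholders):

        CREATE PARTITION FUNCTION pfGroup (int)
            AS RANGE RIGHT FOR VALUES (1, 2, 3);

        CREATE PARTITION SCHEME psGroup
            AS PARTITION pfGroup ALL TO ([PRIMARY]);

        -- when GroupID 4 is introduced, split a new partition off the function
        ALTER PARTITION FUNCTION pfGroup() SPLIT RANGE (4);

    The SPLIT RANGE step would typically be wrapped in a scheduled job or run by whatever process inserts new groups.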

    Read the article

  • Round time to the nearest 5 minutes in SQL Server

    - by Drako
    I don't know if it can be useful to somebody, but I went crazy looking for a solution and ended up doing it myself. Here is a function that, according to a date passed as a parameter, returns the same date with the time rounded to the nearest multiple of 5 minutes. It is a slow query, so if anyone has a better solution, it is welcome. Regards.

        CREATE FUNCTION [dbo].[RoundTime] (@Time DATETIME)
        RETURNS DATETIME
        AS
        BEGIN
            DECLARE @min nvarchar(50)
            DECLARE @val int
            DECLARE @hour int
            DECLARE @temp int
            DECLARE @day datetime
            DECLARE @date datetime

            SET @date = CONVERT(DATETIME, @Time, 120)
            SET @day = (select DATEADD(dd, 0, DATEDIFF(dd, 0, @date)))
            SET @hour = (select datepart(hour, @date))
            SET @min = (select datepart(minute, @date))

            IF LEN(@min) > 1
            BEGIN
                SET @val = CAST(substring(@min, 2, 1) as int)
            END
            ELSE
            BEGIN
                SET @val = CAST(substring(@min, 1, 1) as int)
            END

            IF @val <= 2
            BEGIN
                SET @val = CAST(CAST(@min as int) - @val as int)
            END
            ELSE
            BEGIN
                IF (@val <> 5)
                BEGIN
                    SET @temp = 5 - CAST(@min % 5 as int)
                    SET @val = CAST(CAST(@min as int) + @temp as int)
                END
                IF (@val = 60)
                BEGIN
                    SET @val = 0
                    SET @hour = @hour + 1
                END
                IF (@hour = 24)
                BEGIN
                    SET @day = DATEADD(day, 1, @day)
                    SET @hour = 0
                    SET @min = 0
                END
            END

            RETURN CONVERT(datetime,
                           CAST(DATEPART(YYYY, @day) as nvarchar) + '-' +
                           CAST(DATEPART(MM, @day) as nvarchar) + '-' +
                           CAST(DATEPART(dd, @day) as nvarchar) + ' ' +
                           CAST(@hour as nvarchar) + ':' +
                           CAST(@val as nvarchar), 120)
        END
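
    A much shorter alternative based on date arithmetic (a sketch, assuming SQL Server; it shifts the value by 2.5 minutes and then truncates to a 5-minute boundary, which gives round-to-nearest, with seconds discarded):

        CREATE FUNCTION dbo.RoundTime5 (@Time DATETIME)
        RETURNS DATETIME
        AS
        BEGIN
            -- whole minutes since 1900-01-01, nudged by 150 seconds so that
            -- integer division by 5 lands on the nearest 5-minute boundary
            RETURN DATEADD(minute,
                           (DATEDIFF(minute, 0, DATEADD(second, 150, @Time)) / 5) * 5,
                           0);
        END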

    Read the article

  • SQL to have one specific record at the top, all others below

    - by superdario
    Hey all, I am trying to put together a query that will display one specific record (found by the record's primary ID) at the top, and display all other records below it, sorted by date (I have "date_added" as one of the fields in the table, in addition to the primary ID). I could do this with a UNION (the first select would locate the record I want, and the other select would display all other records), but I'm wondering if there is perhaps a better way? I'm using Oracle, by the way.
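
    A common alternative to the UNION is a single ORDER BY that sorts the pinned row first (a sketch, assuming Oracle, a hypothetical table name records, a bind variable :pinned_id for the chosen record, and that newest-first is the desired date order):

        SELECT *
        FROM records
        ORDER BY CASE WHEN id = :pinned_id THEN 0 ELSE 1 END,
                 date_added DESC;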

    Read the article

  • How to get a particular column distinct in LINQ to SQL

    - by kart
    Hi all, my table has a category column and a song column. Each category has almost 10 songs, and there are 7 categories in total, laid out like this:

        category1  songCategory1a
        category1  songCategory1b
        category1  songCategory1c
        ---
        category2  songCategory2a
        category2  songCategory2b
        category2  songCategory2c
        ---
        category3  songCategory3a
        category3  songCategory3b
        category3  songCategory3c
        ---

    From that table I want to get the result as: category1, category2, category3, category4. Kindly, can anyone help me? I tried

        (from s in _context.db_songs select new { s.Song_Name, s.Song_Category }).Distinct().ToList();

    but it didn't work; it just returns the rows as they are.
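
    The SQL the query needs to express is a distinct over the category column alone; including Song_Name in the projection makes every row unique, so Distinct() has nothing to remove. A sketch of the target query (using the column and table names from the attempt above):

        SELECT DISTINCT Song_Category
        FROM db_songs;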

    Read the article

  • FreeText Query is slow - includes TOP and Order By

    - by Eric P
    The Product table has 700K records in it. The query:

        SELECT TOP 1 ID, Name
        FROM Product
        WHERE contains(Name, '"White Dress"')
        ORDER BY DateMadeNew desc

    takes about 1 minute to run. There is a non-clustered index on DateMadeNew and a FreeText index on Name. If I remove TOP 1 or the ORDER BY, it takes less than 1 second to run. Here is the link to the execution plan: http://screencast.com/t/ZDczMzg5N It looks like FullTextMatch has over 400K executions. Why is this happening? How can it be made faster?
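
    One workaround that is sometimes tried (a sketch, assuming SQL Server full-text search and that ID is the table's full-text key column) is to join against CONTAINSTABLE so the full-text engine produces its matches once, instead of being probed row by row while the ORDER BY walks the DateMadeNew index:

        SELECT TOP 1 p.ID, p.Name
        FROM Product p
        INNER JOIN CONTAINSTABLE(Product, Name, '"White Dress"') ft
                ON ft.[KEY] = p.ID
        ORDER BY p.DateMadeNew DESC;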

    Read the article

  • Help with SQL query in C#

    - by DanSogaard
    I'm trying to rename the columns. The syntax should be the column name between double quotes in case of two words, like this:

        SELECT p_Name "Product Name" from items

    So I'm trying to do it in C# code like this:

        string sqlqry1 = "SELECT p_Name \"Prodcut Name\" from items";

    But I get an error: Syntax error (missing operator) in query expression 'p_Name "Prodcut Name"'. It seems I have something wrong with the quotes, but I can't figure it out.
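
    That "missing operator" wording usually comes from the OLE DB / Access (Jet) dialect rather than SQL Server (an assumption about the provider), and that dialect does not accept double-quoted aliases. A sketch of the alias written with AS and square brackets, which that dialect does accept:

        SELECT p_Name AS [Product Name]
        FROM items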

    Read the article

  • SQL report link with rs:Command parameters not opening in JSF page

    - by H3wh0s33ks
    I have a report that we need to link to (and which we've checked to be working) from a JSF project. The link looks like the following: http://www.example.com/report/summary&rs:Command=Render However, when we try to load the page that links to it we get the following error: The reference to entity "rs:Command" must end with the ';' How can I link to the report within my pages and prevent the parser from trying to interpret rs:Command as an entity?

    Read the article

  • Stored proc executes >30 secs when called from website, but <1 sec when called from SSMS

    - by Blootac
    I have a stored procedure that is called by a website to display data. Today the web page started timing out, so I got Profiler going and saw the query that was taking too long. I then ran the same query in Management Studio, under the same user login, and it takes less than a second to return. Is there anything obvious that could be causing this? I can't think of a reason why the stored proc takes 30 secs when ASP calls it but is fine when I call it. Thanks
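
    A frequent cause of exactly this symptom is that the application and SSMS run with different session SET options (ARITHABORT in particular), so each gets its own cached plan, and the application's plan was compiled for unrepresentative parameter values. A quick way to try to reproduce the slow behaviour in SSMS is to mimic the client's setting before executing (a sketch; the procedure name and parameter are hypothetical):

        SET ARITHABORT OFF;   -- client libraries typically run with this OFF, SSMS with it ON
        EXEC dbo.GetDisplayData @CustomerID = 42;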

    Read the article

  • SQL query multi table selection

    - by nemiss
    I have 3 tables:

    - Section table that defines some general item sections.
    - Category table - has a "section" column (foreign key).
    - Product table - has a "category" column (foreign key).

    I want to get all products that belong to section X. How can I do it? A select from a select?
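
    A sketch of the usual two-step join (assuming each table's primary key is named id; the foreign key column names section and category come from the description, and :sectionId stands in for the section being filtered on):

        SELECT p.*
        FROM Product p
        INNER JOIN Category c ON c.id = p.category
        WHERE c.section = :sectionId;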

    Read the article

  • Is INT the correct datatype for ABS(CHECKSUM(NEWID()))?

    - by Chad Sellers
    I'm in the process of creating unique customer IDs as an alternative ID for external use. I added a new column, "cust_uid", with datatype INT for these unique IDs. When I do an INSERT into this new column:

        Insert Into Customers(cust_uid)
        Select ABS(CHECKSUM(NEWID()))

    I get an error: Could not create an acceptable cursor. OLE DB provider "SQLNCLI" for linked server "SHQ2IIS1" returned message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done." I've checked all data types on both tables, and the only thing that has changed is the new column in both tables. The update is being done on one big table... and for reasons above my pay grade, we would like to have new UIDs that are different from the ones we currently have, "so users don't know how many accounts we actually have." Is INT the correct datatype for ABS(CHECKSUM(NEWID()))?
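
    CHECKSUM returns an int and ABS of an int is still an int, so INT does fit the value itself. If the goal is to populate the new column for existing customers (an assumption; the INSERT above would add new rows instead), a sketch would be an UPDATE, with the caveat that these values are not guaranteed unique, so a uniqueness check or constraint is still needed:

        UPDATE Customers
        SET cust_uid = ABS(CHECKSUM(NEWID()));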

    Read the article

  • What is the most effective and flexible way to generate combinations in TSQL?

    - by SDReyes
    What is the most effective and flexible way to generate combinations in TSQL? By "flexible", I mean you should be able to add combination rules easily, e.g.: generate combinations of 'n' elements, sorting, remove duplicates, get combinations where each prize belongs to a different lottery, etc. For example, take a set of numbers representing lottery prizes (I include the position column because a number could be repeated among different lotteries' prizes):

        Number | Position | Lottery
        ---------------------------
        12     | 01       | 67
        12     | 02       | 67
        34     | 03       | 67
        43     | 01       | 89
        72     | 02       | 89
        33     | 03       | 89

    I would like to generate combinations like:

        Numbers | Lotteries
        -------------------
        12 12   | 67 67
        12 34   | 67 67
        12 34   | 67 67
        12 43   | 67 89
        12 72   | 67 89
        12 33   | 67 89
        . . .
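
    A minimal sketch for the 2-element case using a self-join (assuming a hypothetical table name prizes with the columns shown above; the ordering predicate on (Lottery, Position) pairs each row with every "later" row, which avoids pairing a row with itself and avoids mirrored duplicates, and further rules simply become extra predicates):

        SELECT a.Number AS Number1, b.Number AS Number2,
               a.Lottery AS Lottery1, b.Lottery AS Lottery2
        FROM prizes a
        INNER JOIN prizes b
                ON (b.Lottery > a.Lottery)
                OR (b.Lottery = a.Lottery AND b.Position > a.Position)
        -- example of an added rule: keep only pairs drawn from different lotteries
        -- WHERE a.Lottery <> b.Lottery
        ORDER BY a.Lottery, a.Position, b.Lottery, b.Position;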

    Read the article

  • Use Linq to SQL to generate sales report

    - by Richard Reddy
    I currently have the following code to generate a sales report over the last 30 days. I'd like to know if it would be possible to use LINQ to generate this report in one step instead of the rather basic loop I have here. For my requirement, every day needs to return a value, so if there are no sales for a day then a 0 is returned. The LINQ Sum examples out there don't explain how to include a where filter, so I am confused about how to get the total amount per day, or a 0 if there are no sales, for each of the days I pass through. Thanks for your help, Rich

        //setup date ranges to use
        DateTime startDate = DateTime.Now.AddDays(-29);
        DateTime endDate = DateTime.Now.AddDays(1);
        TimeSpan startTS = new TimeSpan(0, 0, 0);
        TimeSpan endTS = new TimeSpan(23, 59, 59);

        using (var dc = new DataContext())
        {
            //get database sales from 29 days ago at midnight to the end of today
            var salesForDay = dc.Orders.Where(b => b.OrderDateTime > Convert.ToDateTime(startDate.Date + startTS)
                                               && b.OrderDateTime <= Convert.ToDateTime(endDate.Date + endTS));

            //loop through each day and sum up the total orders, if none then set to 0
            while (startDate != endDate)
            {
                decimal totalSales = 0m;
                DateTime startDay = startDate.Date + startTS;
                DateTime endDay = startDate.Date + endTS;

                foreach (var sale in salesForDay.Where(b => b.OrderDateTime > startDay && b.OrderDateTime <= endDay))
                {
                    totalSales += (decimal)sale.OrderPrice;
                }

                Response.Write("From Date: " + startDay + " - To Date: " + endDay + ". Sales: "
                               + String.Format("{0:0.00}", totalSales) + "<br>");

                //move to next day
                startDate = startDate.AddDays(1);
            }
        }
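
    For comparison, the set-based SQL this report amounts to is a single GROUP BY over the day (a sketch, assuming SQL Server 2008 or later and the Orders / OrderDateTime / OrderPrice names from the code above; days with no sales are simply absent from the result, so they would still need to be filled in afterwards or via a calendar table):

        SELECT CONVERT(date, OrderDateTime) AS SaleDay,
               SUM(OrderPrice) AS TotalSales
        FROM Orders
        WHERE OrderDateTime >= DATEADD(day, -29, CONVERT(date, GETDATE()))
        GROUP BY CONVERT(date, OrderDateTime)
        ORDER BY SaleDay;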

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    Where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    Which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly-elegant solution in place for sharding the data across multiple tables which has been working quite well, but I am open for any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
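
    One commonly suggested rewrite (a sketch, assuming SQL Server 2005 or later) replaces the nested MAX(id) lookup with a single window-function pass; an index on (userid, variable, id) including value would support it well:

        SELECT id, variable, value
        FROM (
            SELECT id, variable, value,
                   ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn
            FROM results WITH (NOLOCK)
            WHERE userid = 2111846
              AND variable IN ('internat', 'veteran', 'athlete')
        ) t
        WHERE rn = 1;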

    Read the article

  • Connecting to an MS SQL Server from Silverlight?

    - by cam
    Normally, I would use a PHP web service to do this, but since the front end is hosted on a Linux box, I need another way (so I don't have to go through the trouble of installing FreeTDS, etc.; I will if I have to). Is there a better way to do this? I'm not a web guy, but I'm trying my best.

    Read the article
