Search Results

Search found 38660 results on 1547 pages for 'sql index'.


  • Multiple column sorting (SQL SERVER 2005)

    - by Newbie
    I have a table which looks like this:

        Col1 col2 col3 col4 col5
        1    5    1    4    6
        1    4    0    3    7
        0    1    5    6    3
        1    8    2    1    5
        4    3    2    1    4

    The script is:

        declare @t table(col1 int, col2 int, col3 int, col4 int, col5 int)
        insert into @t
        select 1,5,1,4,6 union all
        select 1,4,0,3,7 union all
        select 0,1,5,6,3 union all
        select 1,8,2,1,5 union all
        select 4,3,2,1,4

    I want the output to have every column sorted in ascending order, i.e.

        Col1 col2 col3 col4 col5
        0    1    0    1    3
        1    3    1    1    4
        1    4    2    3    5
        1    5    2    4    6
        4    8    5    6    7

    I have already solved the problem with the following query:

        Select x1.col1, x2.col2, x3.col3, x4.col4, x5.col5
        From (Select Row_Number() Over(Order By col1) rn1, col1 From @t) x1
        Join (Select Row_Number() Over(Order By col2) rn2, col2 From @t) x2 On x1.rn1 = x2.rn2
        Join (Select Row_Number() Over(Order By col3) rn3, col3 From @t) x3 On x1.rn1 = x3.rn3
        Join (Select Row_Number() Over(Order By col4) rn4, col4 From @t) x4 On x1.rn1 = x4.rn4
        Join (Select Row_Number() Over(Order By col5) rn5, col5 From @t) x5 On x1.rn1 = x5.rn5

    But I am not happy with this solution. Is there a better way to achieve the same result
    (using a set-based approach)? If so, could anyone please show an example? Thanks
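    One set-based alternative, as a sketch only (it reuses the @t table variable declared in
    the script above and assumes SQL Server 2005 or later, where UNPIVOT/PIVOT are available):
    unpivot the five columns into name/value pairs, rank each column's values independently
    with ROW_NUMBER, and pivot them back by that rank. Note that UNPIVOT silently drops NULLs.

        ;WITH unpvt AS
        (
            SELECT colname, val,
                   ROW_NUMBER() OVER (PARTITION BY colname ORDER BY val) AS rn
            FROM (SELECT col1, col2, col3, col4, col5 FROM @t) AS s
            UNPIVOT (val FOR colname IN (col1, col2, col3, col4, col5)) AS u
        )
        SELECT col1, col2, col3, col4, col5
        FROM unpvt
        PIVOT (MAX(val) FOR colname IN (col1, col2, col3, col4, col5)) AS p
        ORDER BY rn;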


  • Stored procedure and trigger

    - by noober
    Hello all, I had a task: create an UPDATE trigger that fires only when table data really
    changes (not on updates that write the same values). For that purpose I created a copy
    table and began comparing updated rows against the old copied ones. When the trigger
    completes, it is necessary to refresh the copy:

        UPDATE CopyTable
        SET id = s.id -- many, many fields
        FROM MainTable s
        WHERE s.id IN (SELECT [id] FROM INSERTED)
          AND CopyTable.id = s.id;

    I don't want this ugly code in the trigger anymore, so I extracted it into a stored
    procedure:

        CREATE PROCEDURE UpdateCopy
        AS
        BEGIN
            UPDATE CopyTable
            SET id = s.id -- many, many fields
            FROM MainTable s
            WHERE s.id IN (SELECT [id] FROM INSERTED)
              AND CopyTable.id = s.id;
        END

    The result is: Invalid object name 'INSERTED'. How can I work around this? Regards,
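    The INSERTED pseudo-table only exists inside the trigger body, so one common workaround
    (a sketch only, assuming SQL Server 2008 or later for table-valued parameters; the type
    name is hypothetical) is to pass the affected ids into the procedure explicitly:

        -- Table type used to hand the affected ids to the procedure.
        CREATE TYPE IdList AS TABLE (id INT PRIMARY KEY);
        GO
        CREATE PROCEDURE UpdateCopy @ids IdList READONLY
        AS
        BEGIN
            UPDATE c
            SET c.id = s.id -- many, many fields
            FROM CopyTable c
            JOIN MainTable s ON c.id = s.id
            WHERE s.id IN (SELECT id FROM @ids);
        END
        GO
        -- Inside the trigger body:
        -- DECLARE @ids IdList;
        -- INSERT INTO @ids (id) SELECT id FROM INSERTED;
        -- EXEC UpdateCopy @ids;

    On SQL Server 2005, a #temp table created in the trigger and read by the procedure
    achieves the same thing, since temp tables are visible to procedures called from the
    creating scope.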


  • ignore some values in insert into select from sql statement

    - by nitroxn
    Suppose that I have a table Symbols(Symbol, Value) and a table SymbolValues(Symbol, Value)
    which contains a list of values for each symbol. How can I choose the maximum value per
    symbol from the SymbolValues table and insert it into the Symbols table? For example, if
    the SymbolValues table has the following rows:

        A 1
        A 2
        A 3
        B 6
        B 7

    then only A 3 and B 7 should be inserted into the Symbols table. Is this possible using
    an INSERT INTO ... SELECT statement? Thanks
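    A minimal sketch, assuming Value is numeric and "maximum" means the largest Value per
    Symbol:

        INSERT INTO Symbols (Symbol, Value)
        SELECT Symbol, MAX(Value)
        FROM SymbolValues
        GROUP BY Symbol;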


  • Is this SQL select code following good practice?

    - by acidzombie24
    I am using SQLite and will port to MySQL (5) later. I wanted to know if I am doing
    something I shouldn't be doing. I purposely designed it so I compare to 0 instead of 1
    (I changed hasApproved to NotApproved to do this; not a big deal and I haven't written
    any code yet). I was told I should never need to write a subquery, but I do here. My
    Votes table is just id, ip, postid (I don't think I can write that subquery as a join
    instead?), and that's pretty much all that is on my mind. Naming conventions I don't
    really care about since the tables are created via reflection and are all over the place.

        select id, name, body, upvotes, downvotes,
               (select 1 from UpVotes   where IPAddr = ? AND post = Post.id) as myup,
               (select 1 from DownVotes where IPAddr = @0 AND post = Post.id) as mydown
        from Post
        where flag = '0'
        limit ?, ?
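    For reference, the two correlated subqueries can be written as outer joins instead, as a
    sketch (this assumes (IPAddr, post) is unique in each votes table, so the joins cannot
    multiply rows):

        SELECT p.id, p.name, p.body, p.upvotes, p.downvotes,
               CASE WHEN u.post IS NULL THEN 0 ELSE 1 END AS myup,
               CASE WHEN d.post IS NULL THEN 0 ELSE 1 END AS mydown
        FROM Post p
        LEFT JOIN UpVotes   u ON u.post = p.id AND u.IPAddr = ?
        LEFT JOIN DownVotes d ON d.post = p.id AND d.IPAddr = ?
        WHERE p.flag = '0'
        LIMIT ?, ?;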


  • How can I move a table to another filegroup?

    - by denisioru
    Hello, I have MSSQL 2008 Enterprise and an OLTP database with two big tables. How can I
    move these tables to another filegroup without interrupting service? Right now about
    100-130 records are inserted and 30-50 records are updated each second in these tables.
    Each table has about 100M records and six fields (including one geography field). I
    looked for a solution via Google, but all the solutions amount to "create a second table,
    insert the rows from the first table, drop the first table, bla bla bla". Can I use
    partitioning functions to solve this problem? Thank you.
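    One approach worth testing (a sketch only; the index and filegroup names are
    hypothetical): rebuild the clustered index onto the target filegroup with an online
    rebuild. ONLINE = ON requires Enterprise edition, and on SQL Server 2008 online rebuilds
    are not allowed for indexes that include LOB-type columns, so the geography column may
    force this offline or push you toward a partitioning-based move instead.

        -- Rebuild the clustered index on the new filegroup, keeping the
        -- existing definition (key columns and uniqueness must not change).
        CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (id)
        WITH (DROP_EXISTING = ON, ONLINE = ON)
        ON [SecondaryFG];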


  • Keeping Linq to SQL alive when using ViewModels (ASP.NET MVC)

    - by Kohan
    I have recently started using custom ViewModels (for example, CustomerViewModel):

        public class CustomerViewModel
        {
            public IList<Customer> Customers { get; set; }
            public int ProductId { get; set; }

            public CustomerViewModel(IList<Customer> customers, int productId)
            {
                this.Customers = customers;
                this.ProductId = productId;
            }

            public CustomerViewModel() { }
        }

    ... and am now passing them to my view instead of the entities themselves (for example,
    var custs = repository.getAllCusts(id)), as it seems good practice to do so. The problem
    I have encountered is that when using ViewModels, by the time it has got to the view I
    have lost the ability to lazy load on Customers. I do not believe this was the case
    before using them. Is it possible to retain the ability to lazy load while still using
    ViewModels, or do I have to eager load with this approach? Thanks, Kohan.


  • SQL Server Stored Procedure that returns the number of processed records

    - by Ras
    I have a WinForms application that fires a stored procedure which processes a lot of
    records (around 500k). In order to inform the user how many records have been processed,
    I would need the SP to return a value every n records, for example every 1000 rows
    processed (most of the work is INSERTs). Otherwise I can only inform the user when ALL
    records have been processed. Any hints on how to solve this? I thought it could be useful
    to use a trigger or some scheduled task, but I cannot figure out how to implement it.
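    One option, as a sketch with hypothetical table names (assuming SQL Server 2005 or
    later): do the work in batches inside the procedure and emit an informational message
    after each batch. RAISERROR with severity 0 and the NOWAIT option is flushed to the
    client immediately, and a .NET caller can receive the messages through the
    SqlConnection.InfoMessage event.

        DECLARE @done INT, @rows INT, @msg NVARCHAR(200);
        SET @done = 0;
        SET @rows = 1;

        WHILE @rows > 0
        BEGIN
            -- Copy the next batch of 1000 rows (hypothetical tables).
            INSERT INTO dbo.TargetTable (id, payload)
            SELECT TOP (1000) s.id, s.payload
            FROM dbo.SourceTable s
            WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.id = s.id);

            SET @rows = @@ROWCOUNT;
            SET @done = @done + @rows;

            -- Informational progress message; severity 0 does not abort the batch.
            SET @msg = CAST(@done AS NVARCHAR(20)) + N' rows processed';
            RAISERROR (@msg, 0, 1) WITH NOWAIT;
        END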


  • SQL query showing missing expression

    - by Ashok Dasari
    Query to pull the data between yesterday 6AM and today 6AM:

        SELECT lot_id, log_time, batch_no, eqp_id, STATION_ID,
               EXTRACTVALUE(META_DATA, '/lot_info/A3') AS A3,
               EXTRACTVALUE(META_DATA, '/lot_info/A3Info') AS A3Info,
               EXTRACTVALUE(META_DATA, '/lot_info/apc_status_info') AS apc_status_info
        FROM t_dlis_log_history
        WHERE ( (EQP_ID = 'ALC4360') OR (EQP_ID = 'ALC4361') OR (EQP_ID = 'ALC1360')
             OR (EQP_ID = 'ALC1361') OR (EQP_ID = 'ALC1362') OR (EQP_ID = 'ALC1363')
             OR (EQP_ID = 'ALC1364') OR (EQP_ID = 'ALC1365') OR (EQP_ID = 'ALC355')
             OR (EQP_ID = 'ALC353')  OR (EQP_ID = 'ALC4350') OR (EQP_ID = 'ALC354') )
          AND (( log_time >= DATEADD(HOUR, 6, CONVERT(VARCHAR(10), GETDATE(), 110) )
          AND ( log_time <= DATEADD(HOUR, 6, CONVERT(VARCHAR(10), GETDATE() + 1, 110) ) ) )

    It is showing the error "missing expression".
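    For reference, a possible cleanup, as a sketch only: the EQP_ID conditions collapse into
    an IN list, and the date-range parentheses in the original are unbalanced, which by
    itself can cause this kind of parse error. Also note that DATEADD, CONVERT and GETDATE
    are T-SQL functions while "missing expression" is an Oracle error message, so if the
    target really is Oracle those calls need Oracle equivalents as well. As written the
    range covers today 6AM to tomorrow 6AM; for yesterday-to-today use GETDATE() - 1 and
    GETDATE() instead.

        SELECT lot_id, log_time, batch_no, eqp_id, station_id
               -- EXTRACTVALUE columns omitted for brevity
        FROM t_dlis_log_history
        WHERE eqp_id IN ('ALC4360', 'ALC4361', 'ALC1360', 'ALC1361', 'ALC1362', 'ALC1363',
                         'ALC1364', 'ALC1365', 'ALC355', 'ALC353', 'ALC4350', 'ALC354')
          AND log_time >= DATEADD(HOUR, 6, CONVERT(VARCHAR(10), GETDATE(), 110))
          AND log_time <= DATEADD(HOUR, 6, CONVERT(VARCHAR(10), GETDATE() + 1, 110));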


  • SQL design question regarding schema and whether a name/value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project.
    I am by no means a DBA. The application parses through a file based on user input and
    enters that data into the database. The number of fields that can be parsed is between 1
    and 42 at the current moment.

    The current design of the database is entirely flat, with 42 columns; some are repeated
    columns such as address1, address2, address3, etc. That suggests I should normalize the
    data. However, data integrity is not needed at this moment, and the way the data is
    shaped I'm looking at several joins. Not a bad thing, but the data is still in a
    one-to-one relationship and I still see a lot of empty fields per row. So my concern is
    that this does not allow the database or the application to be very extensible. If they
    want to add more fields to be parsed (which they do) then I'd need to create another
    table and add another foreign key to the linking table.

    The third option is to have a table where the fields are defined, a table for each
    record, and a table that stores the values and links to those two tables. The problem is
    I can picture the size of that value table growing large depending on the input size. If
    someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I
    have some reservations. However, I think if I get to that point I should be happy it is
    being used. This option also allows for more custom display of information, albeit a bit
    more work, but little rework even if more fields are added.

    So the problem boils down to:

    1. The current design is a flat file, which makes extending it hard, and it is not
       normalized.
    2. Normalize the tables, although there are no real benefits for the moment, but
       requirements change.
    3. Normalize it down into name/value pairs and hope the size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so
    performance is a worry, but I believe the saying is "design now, performance test
    later"? I'm probably just missing something practical, so any comments would be
    appreciated, even if it's a quick sanity check. Thank you for your time.
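    For what the third option looks like concretely, here is a T-SQL-style sketch of the
    name/value (EAV) layout being described; all table and column names are hypothetical:

        CREATE TABLE FieldDef (
            field_id   INT IDENTITY PRIMARY KEY,
            field_name NVARCHAR(100) NOT NULL
        );

        CREATE TABLE Record (
            record_id  INT IDENTITY PRIMARY KEY,
            created_at DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- One row per (record, field) actually present in the input file,
        -- so sparse rows cost nothing and new fields need no schema change.
        CREATE TABLE RecordValue (
            record_id INT NOT NULL REFERENCES Record (record_id),
            field_id  INT NOT NULL REFERENCES FieldDef (field_id),
            value     NVARCHAR(400) NULL,
            PRIMARY KEY (record_id, field_id)
        );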


  • How to get only the date from SQL Server

    - by Pradeep
    This is what I want, but I have put in only a specific date:

        SELECT BookName, Author, BookPrice
        FROM Book
        WHERE Book.Book_ID = ( SELECT Book_ID
                               FROM Temp_Order
                               WHERE Temp_Order.User_ID = 25
                                 AND Temp_Order.OrderDate = '3/24/2010' )

    This is the date function I used, but it takes the time also. How do I stop that?
    Please help me.

        SELECT Book_ID, BookName, Author, BookPrice
        FROM Book
        INNER JOIN FavCategory ON Book.Category_ID = FavCategory.Category_ID
        WHERE FavCategory.User_ID = " + useridlabel.Text + "
          AND OrderDate = GETDATE()
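    A sketch of one way to compare on the date only (this is the pre-2008 idiom; on SQL
    Server 2008 and later CAST(GETDATE() AS DATE) also works). The range comparison also
    matches OrderDate values that carry a time portion, and the user id is shown as a
    parameter rather than string concatenation:

        SELECT b.Book_ID, b.BookName, b.Author, b.BookPrice
        FROM Book b
        INNER JOIN FavCategory f ON b.Category_ID = f.Category_ID
        WHERE f.User_ID = @UserId
          -- midnight today <= OrderDate < midnight tomorrow
          AND OrderDate >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)
          AND OrderDate <  DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) + 1, 0);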


  • SQL - table alias scope.

    - by Support - multilanguage SO
    I've just learned (yesterday) to use "exists" instead of "in".

    BAD:

        select *
        from table
        where nameid in ( select nameid
                          from othertable
                          where otherdesc = 'SomeDesc' )

    GOOD:

        select *
        from table t
        where exists ( select nameid
                       from othertable o
                       where t.nameid = o.nameid and otherdesc = 'SomeDesc' )

    And I have some questions about this:

    1) The explanation as I understood it was: "The reason this is better is that only the
       matching values will be returned, instead of building a massive list of possible
       results." Does that mean that while the first subquery might return 900 results, the
       second will return only 1 (yes or no)?

    2) In the past I have had the RDBMS complaining: "only the first 1000 rows might be
       retrieved". Would this second approach solve that problem?

    3) What is the scope of the alias in the second subquery? Does the alias only live
       inside the parentheses? For example:

        select *
        from table t
        where exists ( select nameid
                       from othertable o
                       where t.nameid = o.nameid and otherdesc = 'SomeDesc' )
          AND exists ( select nameid
                       from othertable o
                       where t.nameid = o.nameid and otherdesc = 'SomeOtherDesc' )

    That is, if I use the same alias (o for table othertable) in the second "exists", will
    it present any problem with the first exists, or are they totally independent? Is this
    something Oracle-specific, or is it valid for most RDBMSs? Thanks a lot


  • SQL Like query last match

    - by teepusink
    Hi, I have a database that has a name field (i.e. "Firstname M. Lastname" or just
    "Firstname Lastname"). I'm trying to filter by last name. How can I write a query that
    finds the last space? Something like:

        select * from person where name like '% a%'

    (but where the space is the last space). Thanks, Tee
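    If this is MySQL (the question doesn't say), SUBSTRING_INDEX with a negative count
    returns everything after the last occurrence of the delimiter, which makes the "last
    space" problem direct; a sketch:

        -- Everything after the last space in name, compared to a literal last name.
        SELECT *
        FROM person
        WHERE SUBSTRING_INDEX(name, ' ', -1) = 'Lastname';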


  • Send 404 when requesting index.php through .htaccess?

    - by Daniel
    I've recently refactored an existing CodeIgniter application to use URL segments instead
    of query strings, and I'm using a RewriteRule in .htaccess to rewrite everything to
    index.php:

        RewriteRule ^(.*)$ /index.php/$1 [L]

    My problem right now is that a lot of this website's pages are indexed by Google with a
    link to index.php. Since I made the change to use URL segments, I don't care about those
    Google results anymore and I want to send a 404 (no need for a 301 Moved Permanently;
    there have been enough changes, it'll just have to recrawl everything). To get to the
    point: how do I redirect requests to /index.php?whatever to a 404 page? I was thinking
    of rewriting to a non-existent file that would cause Apache to send a 404. Would this be
    an acceptable solution, and what would the RewriteRule for that look like?

    Edit: currently, the old Google results just cause the following error: "An Error Was
    Encountered: The URI you submitted has disallowed characters."


  • Help with Parameterizing SQL Query for C# with possible null values

    - by Jhonny D. Cano -Leftware-
    Hello, people. I need help with parameterizing this query:

        SELECT *
        FROM greatTable
        WHERE field1 = @field1 AND field2 = @field2

    The user should be able to search on either of the 2 fields, and the user should also be
    able to search for rows where field2 is null.

        var query = "theQuery";
        var cm = new SqlCommand(cn, query);
        cm.AddParameter("@field1", "352515");
        cm.AddParameter("@field2", DBNull.Value);
        // my DataTable here is having 0 records
        var dt = GetTable(cm);
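    One likely culprit: in SQL, field2 = NULL never evaluates to true, so passing DBNull for
    @field2 matches nothing. A sketch of a NULL-tolerant version of the query (the C# side
    can stay as it is):

        SELECT *
        FROM greatTable
        WHERE field1 = @field1
          AND (field2 = @field2 OR (@field2 IS NULL AND field2 IS NULL));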


  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        1)  x | y        z          2)  x | y        z
            -----        ---            -----        ---
            1 | a        a              1 | a        a
            1 | b        b              1 | b        b
            2 | a                       1 | c
            2 | b                       2 | a
            2 | c                       2 | b
                                        2 | c

    Is there a way to select the values in the x column of the first table for which all the
    values in the y column (for that x) are found in the z column of the second table?

    In case 1), the expected result is 1. If c is added to the second table, then the
    expected result is 2.

    In case 2), the expected result is no record, since neither of the subsets in the first
    table matches the subset in the second table. If c is added to the second table, then
    the expected result is 1, 2.

    I've tried using except and intersect to compare subsets of the first table with the
    second table, which works fine, but it takes too long on the intersect part and I can't
    figure out why (the first table has about 10,000 records and the second has around 10).

    EDIT: I've updated the question to provide an extra scenario.
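    A sketch of the classic relational-division form of the "every y for this x appears in
    z" reading, using NOT EXISTS instead of EXCEPT/INTERSECT (t1 and t2 are hypothetical
    names for the two tables). Note the expected results above additionally require the sets
    to match exactly; that variant also needs the per-x count of y values compared against
    the row count of t2.

        SELECT DISTINCT a.x
        FROM t1 a
        WHERE NOT EXISTS ( SELECT 1
                           FROM t1 b
                           WHERE b.x = a.x
                             AND NOT EXISTS ( SELECT 1 FROM t2 WHERE t2.z = b.y ) );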

