Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

  • How to get a particular column distinct in LINQ to SQL

    - by kart
    Hi all, my table has two columns, category and song. Each category has roughly 10 songs, and there are 7 categories in total, so the data looks like this:

        category1  songCategory1a
        category1  songCategory1b
        category1  songCategory1c
        category2  songCategory2a
        category2  songCategory2b
        category2  songCategory2c
        category3  songCategory3a
        category3  songCategory3b
        category3  songCategory3c

    I want the result to be just the distinct categories: category1, category2, category3, category4, and so on. Kindly, anyone, help me. I tried:

        (from s in _context.db_songs
         select new { s.Song_Name, s.Song_Category }).Distinct().ToList();

    but it didn't work; it still returns every row.
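
    Because the anonymous type includes Song_Name, every (song, category) pair is already distinct, so Distinct() removes nothing. A minimal sketch of the query shape to aim for instead, projecting only the category before deduplicating (table and column names as in the question); in LINQ this corresponds to selecting s.Song_Category alone and then calling Distinct():

        -- distinct categories only
        SELECT DISTINCT Song_Category
        FROM   db_songs;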

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id]        [bigint] IDENTITY(1,1) NOT NULL,
            [userid]    [int] NULL,
            [variable]  [varchar](8) NULL,
            [value]     [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable = 'internat' OR t.variable = 'veteran' OR t.variable = 'athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (variable = 'internat' OR variable = 'veteran' OR variable = 'athlete')
                       GROUP BY variable)

    This would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query we have been able to come up with. Even so, there is significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) in using the NOLOCK directive here.
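
    A sketch of a windowed alternative that avoids the nested MAX and ranks the qualifying rows in a single pass (SQL Server 2005 and later; the suggested index is an assumption):

        -- rank each of the user's responses per variable, newest first, and keep the top one;
        -- works best with an index on (userid, variable, id) INCLUDE (value)
        SELECT id, variable, value
        FROM (SELECT id, variable, value,
                     ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn
              FROM results WITH (NOLOCK)
              WHERE userid = 2111846
                AND variable IN ('internat', 'veteran', 'athlete')) ranked
        WHERE rn = 1;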

  • How to parse a date from an SSIS Excel filename

    - by user327045
    I want to use the Foreach Loop container to iterate through a folder, matching something like "Filename_MMYYYY.xls". That's easy enough to do, but I can't seem to find a way to parse the MMYYYY from the filename and put it into a variable (or something) that I can use as a lookup field for my DimDate table. It seems possible with a flat file data source, but not an Excel connection. I'm using Visual Studio 2005. Please help!
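
    A sketch of the string logic in T-SQL; the same SUBSTRING/CHARINDEX approach can be ported to an SSIS expression on a package variable (the @FileName variable and its sample value are hypothetical):

        DECLARE @FileName VARCHAR(100);
        SET @FileName = 'Filename_052010.xls';

        -- the month is the 2 characters after the underscore, the year the 4 after that
        SELECT CAST(SUBSTRING(@FileName, CHARINDEX('_', @FileName) + 1, 2) AS INT) AS [Month],
               CAST(SUBSTRING(@FileName, CHARINDEX('_', @FileName) + 3, 4) AS INT) AS [Year];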

  • Use Linq to SQL to generate sales report

    - by Richard Reddy
    I currently have the following code to generate a sales report over the last 30 days. I'd like to know if it would be possible to use LINQ to generate this report in one step instead of the rather basic loop I have here. For my requirement, every day needs to return a value, so if there are no sales on a given day a 0 is returned. None of the LINQ Sum examples out there explain how to include a where filter, so I am confused about how to get the total amount per day (or 0 if there were no sales) for each day in the range. Thanks for your help, Rich

        // set up date ranges to use
        DateTime startDate = DateTime.Now.AddDays(-29);
        DateTime endDate = DateTime.Now.AddDays(1);
        TimeSpan startTS = new TimeSpan(0, 0, 0);
        TimeSpan endTS = new TimeSpan(23, 59, 59);

        using (var dc = new DataContext())
        {
            // get database sales from 29 days ago at midnight to the end of today
            var salesForDay = dc.Orders.Where(b => b.OrderDateTime > Convert.ToDateTime(startDate.Date + startTS)
                                               && b.OrderDateTime <= Convert.ToDateTime(endDate.Date + endTS));

            // loop through each day and sum up the total orders, if none then set to 0
            while (startDate != endDate)
            {
                decimal totalSales = 0m;
                DateTime startDay = startDate.Date + startTS;
                DateTime endDay = startDate.Date + endTS;
                foreach (var sale in salesForDay.Where(b => b.OrderDateTime > startDay && b.OrderDateTime <= endDay))
                {
                    totalSales += (decimal)sale.OrderPrice;
                }
                Response.Write("From Date: " + startDay + " - To Date: " + endDay + ". Sales: "
                               + String.Format("{0:0.00}", totalSales) + "<br>");
                // move to next day
                startDate = startDate.AddDays(1);
            }
        }
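
    A sketch of the set-based shape in SQL (the LINQ equivalent is a GroupBy on the order date left-joined to a generated 30-day range). The recursive CTE builds one row per day so days with no sales come back as 0; assumes SQL Server 2008+ for the DATE type:

        ;WITH days AS (
            SELECT CAST(DATEADD(DAY, -29, GETDATE()) AS DATE) AS d
            UNION ALL
            SELECT DATEADD(DAY, 1, d) FROM days WHERE d < CAST(GETDATE() AS DATE)
        )
        SELECT dy.d                           AS sales_date,
               COALESCE(SUM(o.OrderPrice), 0) AS total_sales
        FROM days dy
        LEFT JOIN Orders o
               ON o.OrderDateTime >= dy.d
              AND o.OrderDateTime <  DATEADD(DAY, 1, dy.d)
        GROUP BY dy.d
        ORDER BY dy.d;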

  • SQL return error within PHP

    - by Luke
    I use GET to fetch the id of a result:

        $id = $_GET['id'];

    I then use the following code:

        <?
        $q = $database->friendlyDetails($id);
        while ($row = mysql_fetch_assoc($q)) {
            $hu     = $row['home_user'];
            $ht     = $row['home_team'];
            $hs     = $row['home_score'];
            $au     = $row['away_user'];
            $at     = $row['away_team'];
            $as     = $row['away_score'];
            $game   = $row['game'];
            $name   = $row['name'];
            $match  = $row['match_report1'];
            $compid = $row['compid'];
            $date   = $row['date_submitted'];
            $sub    = $row['user_submitted'];
        }
        ?>

    And friendlyDetails:

        function friendlyDetails($i) {
            $q = "SELECT * FROM ".TBL_SUB_RESULTS."
                  INNER JOIN ".TBL_FRIENDLY." ON ".TBL_FRIENDLY.".id = ".TBL_SUB_RESULTS.".compid
                  WHERE ".TBL_SUB_RESULTS.".id = '$i'";
            return mysql_query($q, $this->connection);
        }

    For some reason, the code will only return what is under id = 1. Can anyone see anything obvious I am doing wrong?

  • Oracle SQL: Query results from previous X ISO weeks (where X might be > 52)

    - by tommy-o-dell
    How could I adapt this query to show the previous 61 weeks (still excluding the current week)? My query currently shows me the total weekly sales for 2010, grouped by ISO week and ISO year (excluding the current week):

        select to_char(order_date, 'IYYY') as iso_year,
               to_char(order_date, 'IW') as iso_week,
               sum(sale_amount)
        from orders
        where to_char(order_date, 'IW') <> to_char(SYSDATE, 'IW')
          and to_char(order_date, 'IYYY') = '2010'
        group by to_char(order_date, 'IYYY'), to_char(order_date, 'IW')

    I realize I could probably just omit the "2010" requirement, order descending, and limit the results to a certain number of rows. But that just doesn't seem right! Much appreciate any help pointing me in the right direction!
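
    A sketch of a range-based approach: in Oracle, TRUNC(date, 'IW') returns the Monday starting that date's ISO week, so the previous 61 full ISO weeks can be selected directly, regardless of how many calendar years they span:

        SELECT TO_CHAR(order_date, 'IYYY') AS iso_year,
               TO_CHAR(order_date, 'IW')   AS iso_week,
               SUM(sale_amount)            AS weekly_sales
        FROM   orders
        WHERE  order_date >= TRUNC(SYSDATE, 'IW') - (61 * 7)  -- Monday, 61 ISO weeks ago
        AND    order_date <  TRUNC(SYSDATE, 'IW')             -- excludes the current week
        GROUP  BY TO_CHAR(order_date, 'IYYY'), TO_CHAR(order_date, 'IW')
        ORDER  BY iso_year, iso_week;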

  • Setting custom SQL in django admin

    - by eugene y
    I'm trying to set up a proxy model in the Django admin. It will represent a subset of the original model. The code from models.py:

        class MyManager(models.Manager):
            def get_query_set(self):
                return super(MyManager, self).get_query_set().filter(some_column='value')

        class MyModel(OrigModel):
            objects = MyManager()

            class Meta:
                proxy = True

    Now, instead of filter(), I need to use a complex SELECT statement with JOINs. What's the proper way to inject it wholly into the custom manager?

  • Database normalization and duplicate values

    - by bretddog
    Consider a Parent / Child / GrandChild structure in a database table schema, or an even deeper hierarchy, all within the same aggregate. One table, DAYS, keeps a single row per day and has a Date field. This is the root table, or maybe a child of the root. No row can ever be deleted from this table. In this case, however complex my table schema looks, and however far away in the hierarchy any other table is, is there any reason why any other table would hold a Date value? Can't it instead just have an FK to the DAYS table? I obviously assume that these date fields are never created before the corresponding date exists in the DAYS table. I'm thinking only of the date part as relevant here, not the time part. Not sure if all databases can store these individually; that's maybe relevant, but not really the focus of the question.
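
    A minimal sketch of the FK arrangement being described (table, column, and key names are assumptions; the DATE type is available in SQL Server 2008+ and most other engines):

        CREATE TABLE Days (
            DayID   INT  NOT NULL PRIMARY KEY,
            DayDate DATE NOT NULL UNIQUE       -- one row per calendar day
        );

        CREATE TABLE GrandChild (
            GrandChildID INT NOT NULL PRIMARY KEY,
            DayID        INT NOT NULL REFERENCES Days (DayID),  -- instead of a local date column
            Payload      VARCHAR(50) NULL
        );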

  • Multiple column sorting (SQL SERVER 2005)

    - by Newbie
    I have a table which looks like:

        Col1 col2 col3 col4 col5
        1    5    1    4    6
        1    4    0    3    7
        0    1    5    6    3
        1    8    2    1    5
        4    3    2    1    4

    The script is:

        declare @t table(col1 int, col2 int, col3 int, col4 int, col5 int)
        insert into @t
        select 1,5,1,4,6 union all
        select 1,4,0,3,7 union all
        select 0,1,5,6,3 union all
        select 1,8,2,1,5 union all
        select 4,3,2,1,4

    I want the output to have every column sorted in ascending order independently, i.e.:

        Col1 col2 col3 col4 col5
        0    1    0    1    3
        1    3    1    1    4
        1    4    2    3    5
        1    5    2    4    6
        4    8    5    6    7

    I already solved the problem with the following query:

        Select x1.col1, x2.col2, x3.col3, x4.col4, x5.col5
        From (Select Row_Number() Over(Order By col1) rn1, col1 From @t) x1
        Join (Select Row_Number() Over(Order By col2) rn2, col2 From @t) x2 On x1.rn1 = x2.rn2
        Join (Select Row_Number() Over(Order By col3) rn3, col3 From @t) x3 On x1.rn1 = x3.rn3
        Join (Select Row_Number() Over(Order By col4) rn4, col4 From @t) x4 On x1.rn1 = x4.rn4
        Join (Select Row_Number() Over(Order By col5) rn5, col5 From @t) x5 On x1.rn1 = x5.rn5

    But I am not happy with this solution. Is there any better way to achieve the same (using a set-based approach)? If so, could anyone please show an example? Thanks
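
    A sketch of one alternative that reads @t only once: UNPIVOT all five columns into (column name, value) pairs, rank the values within each column, then PIVOT them back with one row per rank (UNPIVOT/PIVOT are available from SQL Server 2005 onward):

        ;with unp as (
            select col, val,
                   row_number() over(partition by col order by val) as rn
            from @t
            unpivot (val for col in (col1, col2, col3, col4, col5)) u
        )
        select col1, col2, col3, col4, col5
        from unp
        pivot (min(val) for col in (col1, col2, col3, col4, col5)) p
        order by rn;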

  • T-SQL Getting duplicate rows returned

    - by cBlaine
    The following query is returning multiple rows for a few records:

        SELECT a.ClientID,
               ltrim(rtrim(c.FirstName)) + ' ' +
               case when c.MiddleName <> '' then ltrim(rtrim(c.MiddleName)) + '. ' else '' end +
               ltrim(rtrim(c.LastName)) as ClientName,
               a.MISCode, b.Address, b.City,
               dbo.ClientGetEnrolledPrograms(CONVERT(int, a.ClientID)) as Abbreviation
        FROM ClientDetail a
        JOIN Address b on (a.PersonID = b.PersonID)
        JOIN Person c on (a.PersonID = c.PersonID)
        LEFT JOIN ProgramEnrollments d on (d.ClientID = a.ClientID and d.Status = 'Enrolled' and d.HistoricalPKID is null)
        LEFT JOIN Program e on (d.ProgramID = e.ProgramID and e.HistoricalPKID is null)
        WHERE a.MichiganWorksData = 1

    I've isolated the issue to the ProgramEnrollments table. This table holds one-to-many relationships, where each ClientID can be enrolled in many programs, so for each program a client is enrolled in there is a record in the table. The final result set is therefore returning a row for each row in the ProgramEnrollments table, based on these joins. I presume my join is the issue, but I don't see the problem. Thoughts/suggestions? Thanks, Chuck
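
    A sketch of one fix: nothing in the SELECT list comes from the two LEFT JOINed tables (the enrolled programs are already supplied by the dbo.ClientGetEnrolledPrograms function), so dropping those joins removes the one-to-many fan-out entirely:

        SELECT a.ClientID,
               ltrim(rtrim(c.FirstName)) + ' ' +
               case when c.MiddleName <> '' then ltrim(rtrim(c.MiddleName)) + '. ' else '' end +
               ltrim(rtrim(c.LastName)) as ClientName,
               a.MISCode, b.Address, b.City,
               dbo.ClientGetEnrolledPrograms(CONVERT(int, a.ClientID)) as Abbreviation
        FROM ClientDetail a
        JOIN Address b on (a.PersonID = b.PersonID)
        JOIN Person c on (a.PersonID = c.PersonID)
        WHERE a.MichiganWorksData = 1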

  • How can I move a table to another filegroup?

    - by denisioru
    Hello, I have MSSQL 2008 Enterprise and an OLTP database with two big tables. How can I move these tables to another filegroup without interrupting service? Right now, about 100-130 records are inserted and 30-50 records are updated each second in these tables. Each table has about 100M records and six fields (including one geography field). I looked for a solution via Google, but all the solutions amount to "create a second table, insert rows from the first table, drop the first table, bla bla bla". Can I use partitioning functions to solve this problem? Thank you.
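
    A sketch of the usual online approach: rebuilding the clustered index with DROP_EXISTING onto the target filegroup moves the table's data without a copy-and-rename. This assumes a standalone clustered index named IX_BigTable_Id on an Id column and a target filegroup named FG2, both hypothetical; note that ONLINE = ON requires Enterprise Edition, and on SQL Server 2008 an online rebuild is not allowed when the table contains LOB or spatial columns, which the geography column may rule out:

        CREATE UNIQUE CLUSTERED INDEX IX_BigTable_Id
            ON dbo.BigTable (Id)
            WITH (DROP_EXISTING = ON, ONLINE = ON)  -- may need ONLINE = OFF because of the geography column
            ON [FG2];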

  • Pivot to obtain EAV data

    - by Snowy
    I have an EAV table (a simple key/value pair in every row) and I need to take the Value from two of the rows and concatenate them into a single row with a single column. I can't seem to get the pivot straight. Can anyone help me figure this out?

        Declare @eavHelp Table (
            [Key]   VARCHAR (8) NOT NULL,
            [Value] VARCHAR (8) NULL
        )
        Insert Into @eavHelp Values ( 'key1' , 'aaa' )
        Insert Into @eavHelp Values ( 'key2' , 'bbb' )

        Select *
        From @eavHelp
        Pivot ( Min( [Value] ) For [Value] in ( hmm1 , hmm2 ) ) as Piv
        Where [Key] = 'key1' or [Key] = 'key2'

    That makes:

        Key      hmm1     hmm2
        -------- -------- --------
        key1     NULL     NULL
        key2     NULL     NULL

    But what I want is:

        hmmmX
        -----
        aaa;bbb
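
    A sketch of a corrected pivot: the spreading column should be [Key] (so key1 and key2 become columns), after which the two values can be concatenated:

        Select Piv.key1 + ';' + Piv.key2 As hmmX
        From (Select [Key], [Value] From @eavHelp) src
        Pivot ( Min([Value]) For [Key] In ( [key1], [key2] ) ) As Piv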

  • SQL report link with rs:Command parameters not opening in JSF page

    - by H3wh0s33ks
    I have a report that we need to link to (we've checked that it works) in a JSF project. The link looks like the following:

        http://www.example.com/report/summary&rs:Command=Render

    However, when we try to load the page that links to it, we get the following error:

        The reference to entity "rs:Command" must end with the ';'

    How can I link to the report within my pages and prevent it from trying to parse rs:Command?

  • Set time part of datetime variable to 18:00

    - by maxt3r
    Hi. I need to set a datetime variable to two days from now, but its time part must be 18:00. For example, if I call getdate() now I'll get 2010-05-17 13:18:07.260; I need to turn that into 2010-05-19 18:00:00.000. Does anybody have a good snippet for that, or any ideas how to do it right?
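
    A sketch using the classic date-floor idiom (DATEDIFF back to day zero strips the time part, then the hours are added back); this works on SQL Server 2005 and later:

        -- midnight two days from now, plus 18 hours
        SELECT DATEADD(HOUR, 18, DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) + 2, 0));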

  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct), in two scenarios:

        Scenario 1:           Scenario 2:
        x | y      z          x | y      z
        -----      ---        -----      ---
        1 | a      a          1 | a      a
        1 | b      b          1 | b      b
        2 | a                 1 | c
        2 | b                 2 | a
        2 | c                 2 | b
                              2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table? In scenario 1, the expected result is 1; if c is added to the second table, then the expected result is 2. In scenario 2, the expected result is no record, since neither x's set of y values matches the set in the second table; if c is added to the second table, then the expected result is 1, 2. I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10). EDIT: I've updated the question to provide an extra scenario.
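
    A sketch of exact relational division, which matches the expected results above (set equality rather than mere containment). It assumes the tables are named t1(x, y) and t2(z) and relies on the rows being distinct, as stated:

        SELECT a.x
        FROM t1 a
        LEFT JOIN t2 b ON a.y = b.z
        GROUP BY a.x
        HAVING COUNT(b.z) = COUNT(*)                      -- every y for this x was matched in z
           AND COUNT(*)   = (SELECT COUNT(*) FROM t2);    -- and nothing in z was missed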

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions. For this request, the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database along with parameters from the request; from this, the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request. This last step is the problem: the command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a best-practice way to accomplish something like this? Thanks!
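
    A sketch of an optimistic-concurrency variant of the final update: read a version token along with the precondition data, then make the update conditional on it, retrying (or failing) when the state has moved. All names here are hypothetical, and the rowversion column is an assumed addition to the table:

        -- the earlier read captured @RowVer along with the precondition data
        UPDATE dbo.Widgets
        SET    Status = @NewStatus
        WHERE  WidgetID = @WidgetID
          AND  RowVer   = @RowVer;    -- matches only if the row is unchanged

        IF @@ROWCOUNT = 0
            RAISERROR('Preconditions changed; re-read and retry.', 16, 1);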

  • Adding more OR searches with CONTAINS brings query to a crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan omitted (I need more reputation before I can post the image).
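
    A sketch of the UNION rewrite being described, which lets each CONTAINS drive its own full-text plan instead of forcing one plan to satisfy all three predicates (UNION rather than UNION ALL keeps the IDs de-duplicated):

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(b.*, '"*fa*"')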

  • SQL query: select from 2 tables at random

    - by klaus
    Hello all, I have a problem that I just CAN'T get to work the way I want. I want to show news and reviews (2 tables) in random order, not the same output every time. Here is my query; I really hope someone can explain what I'm doing wrong:

        SELECT anmeldelser.billed_sti, anmeldelser.overskrift, anmeldelser.indhold,
               anmeldelser.id, anmeldelser.godkendt
        FROM anmeldelser
        LIMIT 0, 6
        UNION ALL
        SELECT nyheder.id, nyheder.billed_sti, nyheder.overskrift, nyheder.indhold,
               nyheder.godkendt
        FROM nyheder
        ORDER BY rand()
        LIMIT 0, 6
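
    A sketch of one fix: the two SELECTs list their columns in different orders (a UNION matches columns by position, not by name), and without parentheses the first LIMIT and the final ORDER BY don't apply where intended. Parenthesizing each branch and aligning the columns gives each table its own random sample, then shuffles the combined set (MySQL syntax):

        (SELECT id, billed_sti, overskrift, indhold, godkendt
         FROM anmeldelser
         ORDER BY RAND() LIMIT 6)
        UNION ALL
        (SELECT id, billed_sti, overskrift, indhold, godkendt
         FROM nyheder
         ORDER BY RAND() LIMIT 6)
        ORDER BY RAND();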
