Search Results

Search found 36773 results on 1471 pages for 'sql statement syntax'.

Page 356/1471

  • LINQ to SQL: On load processing of lazy loaded associations

    - by Matt Holmes
    If I have an object that lazy loads an association with very large objects, is there a way I can do processing at the time the lazy load occurs? I thought I could use AssociateWith or LoadWith from DataLoadOptions, but there are very, very specific restrictions on what you can do in those. Basically I need to be notified when an EntitySet<TEntity> decides it's time to load the associated object, so I can catch that event and do some processing on the loaded object. I don't want to simply walk through the EntitySet when I load the parent object, because that will force all the lazy loaded items to load (defeating the purpose of lazy loading entirely).

    Read the article

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that inserts should be more expensive on a table that has a clustered index than on one that does not, because reorganizing the data in a clustered index involves changing the physical layout of the data on disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries run against it: the first query searches by Name and the second query searches by Name and Something. As I was working on the database I discovered that the table had been created with two indexes, one to support each query, like so:

        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name )

        CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something )

    Now when I look at the two indexes, it seems that IX_Name is redundant, since IX_Name_Something can be used by any query that searches by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:

        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something )

    Someone suggested that the first indexing scheme should be kept, since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates to Name and Something). Would that make sense? I think the second indexing scheme would be better, since it means one less index needs to be maintained. I would appreciate any insight into this specific example, or pointers to more information on the maintenance of clustered indexes.

    Read the article

  • How to select records as columns in SQL

    - by Leigh
    Hi, I have two tables: tblSizes and tblColors. tblColors has columns called ColorName, ColorPrice and SizeID. One size has multiple colors. I need to write a query to select the size and all the colors (as columns) for that size, with the price of each color in its respective column. The colors must be returned as columns, for instance:

        SizeID : Width : Height : Red : Green : Blue
        1      : 220   : 220    : £15 : £20   : £29

    Hope this makes sense. Thank you
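    A minimal sketch using conditional aggregation, assuming tblSizes has SizeID, Width and Height columns and that the colour names are known in advance (dynamic SQL would be needed if they are not):

        SELECT s.SizeID,
               s.Width,
               s.Height,
               MAX(CASE WHEN c.ColorName = 'Red'   THEN c.ColorPrice END) AS Red,
               MAX(CASE WHEN c.ColorName = 'Green' THEN c.ColorPrice END) AS Green,
               MAX(CASE WHEN c.ColorName = 'Blue'  THEN c.ColorPrice END) AS Blue
        FROM tblSizes s
        LEFT JOIN tblColors c ON c.SizeID = s.SizeID
        GROUP BY s.SizeID, s.Width, s.Height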

    Read the article

  • Replication - User defined table type not propagating to subscriber

    - by Aamod Thakur
    I created a user-defined table type named tvp_Shipment with two columns (id and name), generated a snapshot, and the user-defined table type was properly propagated to all the subscribers. I was using this TVP in a stored procedure and everything worked fine. Then I wanted to add one more column, created_date, to this table-valued parameter. I dropped the stored procedure (from replication too), dropped and recreated the user-defined table type with 3 columns, then recreated the stored procedure and enabled it for publication. When I generate a new snapshot, the changes to the user-defined table type are not propagated to the subscriber: the newly added column was not added to the subscription. The error messages:

        The schema script 'usp_InsertAirSa95c0e23_218.sch' could not be propagated to the subscriber.
        (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
        Get help: http://help/MSSQL_REPL-2147201001

        Invalid column name 'created_date'.
        (Source: MSSQLServer, Error number: 207)
        Get help: http://help/207

    Read the article

  • Return if remote stored procedure fails

    - by njk
    I am in the process of creating a stored procedure. This stored procedure runs local as well as external stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE].

        USE [LOCAL]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO

        ALTER PROCEDURE [dbo].[monthlyRollUp]
        AS
        SET NOCOUNT, XACT_ABORT ON

        BEGIN TRY
            EXEC [REMOTE].[DB].[table].[sp]

            -- This transaction should only begin if the remote procedure does not fail
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp1]
            COMMIT

            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp2]
            COMMIT

            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp3]
            COMMIT

            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp4]
            COMMIT
        END TRY
        BEGIN CATCH
            -- Insert error into log table
            INSERT INTO [dbo].[log_table]
                (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
            SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(),
                   ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE()
        END CATCH
        GO

    When using a transaction on the remote procedure, it throws this error:

        OLE DB provider ... returned message "The partner transaction manager has disabled its support for remote/network transactions."

    I get that I'm unable to run a transaction locally for a remote procedure. How can I ensure that this procedure will exit and roll back if any part of the procedure fails?
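    One pattern, sketched only and not tested against this setup: keep the remote call outside any local transaction, capture its return code, and only start the local work if it succeeded. With XACT_ABORT ON and TRY/CATCH, an error raised by the remote EXEC also jumps straight to CATCH before any local transaction has begun. The object names below follow the question's placeholders.

        BEGIN TRY
            DECLARE @rc int;
            EXEC @rc = [REMOTE].[DB].[table].[sp];   -- no local transaction around the remote call
            IF @rc <> 0
                RAISERROR('Remote procedure reported failure (return code %d).', 16, 1, @rc);

            -- local work is only reached when the remote call succeeded
            BEGIN TRAN;
                EXEC [LOCAL].[DB].[table].[sp1];
                EXEC [LOCAL].[DB].[table].[sp2];
            COMMIT;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK;
            -- log the error and exit; nothing local has been committed
        END CATCH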

    Read the article

  • Many to many table design question

    - by user169867
    Originally I had 2 tables in my DB, [Property] and [Employee]. Each employee can have 1 "Home Property", so the employee table has a HomePropertyID FK field to Property. Later I needed to model the situation where, despite having only 1 "Home Property", the employee did work at or cover for multiple properties. So I created an [Employee2Property] table that has EmployeeID and PropertyID FK fields to model this many-to-many relationship. Now I find that I need to create other many-to-many relationships between employees and properties. For example, there may be multiple employees that are managers for a property, or multiple employees that perform maintenance work at a property, etc. My questions are: 1) Should I create separate many-to-many tables for each of these situations, or should I just create one more table like [PropertyAssociationType] that lists the types of associations an employee can have with a property, and add a FK field to [Employee2Property] such as PropertyAssociationTypeID that explains what the association is? I'm curious about the pros/cons, or whether there's another, better way. 2) Am I stupid and going about this all wrong? Thanks for any suggestions :)
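    A sketch of option 1, with the association type folded into the existing join table. Column and key names here are illustrative assumptions, not taken from the real schema:

        CREATE TABLE PropertyAssociationType
        (
            PropertyAssociationTypeID int IDENTITY PRIMARY KEY,
            Name varchar(50) NOT NULL            -- e.g. 'Covers', 'Manager', 'Maintenance'
        );

        CREATE TABLE Employee2Property
        (
            EmployeeID int NOT NULL REFERENCES Employee (EmployeeID),   -- assumed key names
            PropertyID int NOT NULL REFERENCES Property (PropertyID),
            PropertyAssociationTypeID int NOT NULL
                REFERENCES PropertyAssociationType (PropertyAssociationTypeID),
            PRIMARY KEY (EmployeeID, PropertyID, PropertyAssociationTypeID)
        );

    The composite primary key lets one employee hold several different roles at the same property without needing a new table per role.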

    Read the article

  • How to set two columns unique in SQL

    - by sxingfeng
    I am creating a table in which two columns together must be unique; I mean the combination of columnA and columnB must not repeat, even though the individual values may. Such as:

        Table X
        A   B
        1   2    (RIGHT, unique)
        2   2    (RIGHT, unique)
        1   3    (RIGHT, not unique)
        2   3    (RIGHT, not unique)
        1   2    (WRONG, not unique)

    How to create such a table? Many thanks!
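    A minimal sketch using a composite unique constraint (the column types are placeholders):

        CREATE TABLE X
        (
            A int NOT NULL,
            B int NOT NULL,
            CONSTRAINT UQ_X_A_B UNIQUE (A, B)   -- the pair (A, B) must be unique
        );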

    Read the article

  • Why hasn't MSSQL made a WHERE clause mandatory by default?

    - by Josh Einstein
    It seems like a no-brainer to me. I've heard countless stories about people forgetting the WHERE clause in an UPDATE or DELETE and trashing an entire table. I know that careless people shouldn't be issuing queries directly and all that... and that there are legitimate cases where you want to affect all rows, but wouldn't it make sense to have an option, on by default, that requires such queries to be written like:

        UPDATE MyTable SET MyColumn = 0 WHERE *

    Or, without changing the language:

        UPDATE MyTable SET MyColumn = 0 WHERE 1 = 1 -- tacky, I know

    Read the article

  • How to Expression.Invoke an arbitrary LINQ 2 SQL Query

    - by Remus Rusanu
    Say I take an arbitrary LINQ to SQL query's Expression: is it possible to invoke it somehow?

        MyContext ctx1 = new MyContext("...");
        var q = from t in ctx1.table1
                where t.id == 1
                select t;
        Expression qe = q.Expression;
        var res = Expression.Invoke(qe);

    This throws ArgumentException: "Expression of type 'System.Linq.IQueryable`1[...]' cannot be invoked". My ultimate goal is to evaluate the same query on several different data contexts.

    Read the article

  • Need help with SQL Query

    - by StackOverflowNewbie
    Say I have 2 tables:

        Person
        - Id
        - Name

        PersonAttribute
        - Id
        - PersonId
        - Name
        - Value

    Further, let's say that each person had 2 attributes (say, gender and age). A sample record would be like this:

        Person->Id = 1
        Person->Name = 'John Doe'

        PersonAttribute->Id = 1
        PersonAttribute->PersonId = 1
        PersonAttribute->Name = 'Gender'
        PersonAttribute->Value = 'Male'

        PersonAttribute->Id = 2
        PersonAttribute->PersonId = 1
        PersonAttribute->Name = 'Age'
        PersonAttribute->Value = '30'

    Question: how do I query this such that I get a result like this: 'John Doe', 'Male', '30'
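    A sketch of one common approach for this attribute layout: join PersonAttribute once per attribute name, so each attribute becomes its own column:

        SELECT p.Name,
               g.Value AS Gender,
               a.Value AS Age
        FROM Person p
        LEFT JOIN PersonAttribute g ON g.PersonId = p.Id AND g.Name = 'Gender'
        LEFT JOIN PersonAttribute a ON a.PersonId = p.Id AND a.Name = 'Age'

    The LEFT JOINs keep people who are missing one of the attributes; an equivalent form uses MAX(CASE WHEN ... END) with GROUP BY p.Id.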

    Read the article

  • Multilingual best practices on SQL Server, EF and MVC combinations

    - by dengereli
    In ASP.NET MVC, resource management looks like it is enough for multilingual, multiculture application support. But I am wondering about practices for the data itself. User stories:

        - A user sets the culture to en-US and sees all product items in English.
        - A user sets the culture to fr-FR and sees all product items in French.
        - A user sets the culture to ru-RU and sees all product items in Russian.
        - A user who doesn't have the right to change culture settings never reaches the multilingual resources; the application uses the default language and culture.
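    One common practice, sketched only (all table and column names below are illustrative assumptions, not from the question): keep culture-neutral data in the main table and move translatable text into a per-culture translation table, falling back to the default culture when a translation is missing.

        CREATE TABLE Product
        (
            ProductID int IDENTITY PRIMARY KEY,
            Price decimal(10,2) NOT NULL            -- culture-neutral columns stay here
        );

        CREATE TABLE ProductTranslation
        (
            ProductID int NOT NULL REFERENCES Product (ProductID),
            CultureCode char(5) NOT NULL,           -- 'en-US', 'fr-FR', 'ru-RU', ...
            Name nvarchar(200) NOT NULL,
            Description nvarchar(max) NULL,
            PRIMARY KEY (ProductID, CultureCode)
        );

        -- Fetch products for the current culture, falling back to the default culture
        SELECT p.ProductID, p.Price, COALESCE(t.Name, d.Name) AS Name
        FROM Product p
        LEFT JOIN ProductTranslation t ON t.ProductID = p.ProductID AND t.CultureCode = 'fr-FR'
        LEFT JOIN ProductTranslation d ON d.ProductID = p.ProductID AND d.CultureCode = 'en-US';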

    Read the article

  • Count problem in SQL when I want results from different tables

    - by Nicklas
        ALTER PROCEDURE GetProducts
            @CategoryID INT
        AS
        SELECT COUNT(tblReview.GroupID) AS ReviewCount,
               COUNT(tblComment.GroupID) AS CommentCount,
               Product.GroupID,
               MAX(Product.ProductID) AS ProductID,
               AVG(Product.Price) AS Price,
               MAX(Product.Year) AS Year,
               MAX(Product.Name) AS Name,
               AVG(tblReview.Grade) AS Grade
        FROM tblReview, tblComment, Product
        WHERE (Product.CategoryID = @CategoryID)
        GROUP BY Product.GroupID
        HAVING COUNT(distinct Product.GroupID) = 1

    This is what the tables look like:

        Product    | tblReview   | tblComment
        -----------+-------------+------------
        ProductID  | ReviewID    | CommentID
        Name       | Description | Description
        Year       | GroupID     | GroupID
        Price      | Grade       |
        GroupID    |             |

    GroupID is name_year of a Product, e.g. Nike_2010. One product can have different sizes, for example:

        ProductID | Name   | Year | Price | Size | GroupID
        ----------+--------+------+-------+------+------------
        1         | Nike   | 2010 | 50    | 8    | Nike_2010
        2         | Nike   | 2010 | 50    | 9    | Nike_2010
        3         | Nike   | 2010 | 50    | 10   | Nike_2010
        4         | Adidas | 2009 | 45    | 8    | Adidas_2009
        5         | Adidas | 2009 | 45    | 9    | Adidas_2009
        6         | Adidas | 2009 | 45    | 10   | Adidas_2009

    I don't get the right counts from tblReview and tblComment. If I add a review to Nike size 8 and one review to Nike size 10, I want a count of 2 when I list the products by distinct GroupID. Right now I get the same count for reviews and comments, and both are wrong. I use a DataList to show all the products with a distinct/unique GroupID, and I want it to look like this:

         ______________
        |              |
        | Name: Nike   |
        | Year: 2010   |
        | (All Sizes)  |
        | x Reviews    |
        | x Comments   |
        | x AVG Grade  |
        |______________|

    That is, all review counts, comment counts and the average grade of all products with the same GroupID. The average works great.
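    A sketch of one way to get independent counts (untested against this schema): aggregate reviews and comments separately per GroupID in derived tables before joining, so the cross join between the two tables no longer multiplies the counts:

        SELECT p.GroupID,
               MAX(p.ProductID)               AS ProductID,
               AVG(p.Price)                   AS Price,
               MAX(p.Year)                    AS Year,
               MAX(p.Name)                    AS Name,
               ISNULL(MAX(r.ReviewCount), 0)  AS ReviewCount,
               ISNULL(MAX(c.CommentCount), 0) AS CommentCount,
               MAX(r.Grade)                   AS Grade
        FROM Product p
        LEFT JOIN (SELECT GroupID, COUNT(*) AS ReviewCount, AVG(Grade) AS Grade
                   FROM tblReview GROUP BY GroupID) r ON r.GroupID = p.GroupID
        LEFT JOIN (SELECT GroupID, COUNT(*) AS CommentCount
                   FROM tblComment GROUP BY GroupID) c ON c.GroupID = p.GroupID
        WHERE p.CategoryID = @CategoryID
        GROUP BY p.GroupID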

    Read the article

  • MS SQL: Primary file group is full

    - by aximili
    I have a very large table in my database and I am starting to get this error:

        Could not allocate a new page for database 'mydatabase' because of insufficient disk space
        in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup,
        adding additional files to the filegroup, or setting autogrowth on for existing files in
        the filegroup.

    How do you fix this error? I don't understand the suggestions in it.
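    A sketch of the last two suggestions from the error message. The logical file name, path and sizes below are assumptions; check sys.database_files for the real names, and note that growing a file only helps if the drive itself still has free space.

        -- Option 1: allow the existing primary data file to autogrow
        ALTER DATABASE mydatabase
        MODIFY FILE (NAME = mydatabase, FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);

        -- Option 2: add another data file to the PRIMARY filegroup, ideally on a disk with free space
        ALTER DATABASE mydatabase
        ADD FILE (NAME = mydatabase_data2,
                  FILENAME = 'D:\SQLData\mydatabase_data2.ndf',
                  SIZE = 1GB, FILEGROWTH = 256MB)
        TO FILEGROUP [PRIMARY];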

    Read the article

  • Database schemas WAY out of sync - need to get up to date without losing data

    - by Zind
    The problem: we have one application that has a portion used by a very small subset of the total users, and that part of the application runs off of a separate database as well. In a perfect world, the schemas of the two databases would be synced up, but such is not the case. Some migrations have been run on the smaller database, most haven't; and furthermore, there is nothing like a revision number to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we've come up with the following possible plan of action, and I am wondering if anyone knows of any project which has already solved this problem: what we would like to do is create an empty database from the schema of the large, fully migrated database, and then move all of the data from the smaller, non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added. Otherwise, if there are other known solutions, I'd like to hear them as well.

    Read the article

  • SQL UNION ALL with an INNER JOIN

    - by kOhm
    I'm looking for the best way to display all rows from two tables, joining first on one field (dwg) and then, where applicable, on a second field (part). Table1's data consists of schematics (dwg) along with a list of parts required to build the item depicted in the drawing. Table2 consists of data about the actual parts ordered to build the schematic. Some parts in table2 are a combination of parts in table1 (e.g. foo and bar in table1 were ordered as foobar in table2). I can display all rows in both tables with UNION ALL, but this doesn't join on both the dwg and part fields. I looked at FULL OUTER JOIN also, but I haven't figured out how to join first by dwg, then by part. Here is an example of the data:

        table1                     table2
        dwg   part    qty          order  dwg   part     qty
        ----- ------- -----        -----  ----- -------- -----
        123   foo     1            ord1   123   foobar   1
        123   bar     1            ord1   123   bracket  2
        123   widget  2            ord2   123   screw    4
        123   bracket 4            ord2   123   nut      4
        456   foo     1            ord2   123   widget   2
                                   ord2   123   bracket  2
                                   ord3   456   foo      1

    Desired output: the goal is to create a view that provides visibility into all parts in table1 and the associated orders in table2 (including parts that appear in one table but not the other), so that I can see all the drawing parts in table1 and the associated records in table2, along with records in table2 whose part wasn't in table1.

        part_request_order_report
        dwg   part    qty     order  part     qty
        ----- ------- -----   -----  -------- -----
        123   foo     1
        123   bar     1
        123   widget  2       ord2   widget   2
        123   bracket 4       ord1   bracket  2
        123   bracket 4       ord2   bracket  2
        123                   ord1   foobar   1
        123                   ord1   screw    4
        123                   ord1   nut      4
        456   foo     1       ord3   foo      1

    Is this possible? Or am I better off iterating through the data to build the report table? Thanks in advance.
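    A sketch (not verified against the real tables) of a FULL OUTER JOIN on both dwg and part, which keeps unmatched rows from either side. Note it only pairs rows whose part names match exactly, so combined parts like foobar still show up unmatched; the bracket quoting of the order column is SQL Server style.

        SELECT COALESCE(t1.dwg, t2.dwg) AS dwg,
               t1.part, t1.qty,
               t2.[order], t2.part AS ordered_part, t2.qty AS ordered_qty
        FROM table1 t1
        FULL OUTER JOIN table2 t2
               ON  t2.dwg  = t1.dwg
               AND t2.part = t1.part
        ORDER BY COALESCE(t1.dwg, t2.dwg), t1.part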

    Read the article

  • Using AVG() in Oracle SQL

    - by Viet Anh
    I have a table named Student, as follows:

        CREATE TABLE "STUDENT"
        (
            "ID" NUMBER(*,0),
            "NAME" VARCHAR2(20),
            "AGE" NUMBER(*,0),
            "CITY" VARCHAR2(20),
            PRIMARY KEY ("ID") ENABLE
        )

    I am trying to get all the records of the students having a larger age than the average age. This is what I tried:

        SELECT * FROM student WHERE age > AVG(age)

    and

        SELECT * FROM student HAVING age > AVG(age)

    Neither way worked!
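    A sketch of the usual fix: an aggregate like AVG can't be used directly in WHERE, so compute the average in a subquery and compare against it:

        SELECT *
        FROM student
        WHERE age > (SELECT AVG(age) FROM student);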

    Read the article

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every player's batting statistics. I created a view to join those two together; hence the masterplusbatting table.

        CREATE TABLE `Master` (
          `lahmanID` int(9) NOT NULL auto_increment,
          `playerID` varchar(10) NOT NULL default '',
          `nameFirst` varchar(50) default NULL,
          `nameLast` varchar(50) NOT NULL default '',
          PRIMARY KEY (`lahmanID`),
          KEY `playerID` (`playerID`)
        ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1;

        CREATE TABLE `Batting` (
          `playerID` varchar(9) NOT NULL default '',
          `yearID` smallint(4) unsigned NOT NULL default '0',
          `teamID` char(3) NOT NULL default '',
          `lgID` char(2) NOT NULL default '',
          `HR` smallint(3) unsigned default NULL,
          PRIMARY KEY (`playerID`,`yearID`,`stint`),
          KEY `playerID` (`playerID`),
          KEY `team` (`teamID`,`yearID`,`lgID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following:

        select f.yearID, f.nameFirst, f.nameLast, f.HR
        from (
            select yearID, max(HR) as HOMERS
            from masterplusbatting
            group by yearID
        ) as x
        inner join masterplusbatting as f
            on f.yearID = x.yearId and f.HR = x.HOMERS

    This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried:

        select f.yearID, truncate(f.yearid/10,0) as decade, f.nameFirst, f.nameLast, f.HR
        from (
            select yearID, max(HR) as HOMERS
            from masterplusbatting
            group by yearID
        ) as x
        inner join masterplusbatting as f
            on f.yearID = x.yearId and f.HR = x.HOMERS
        group by decade

    You can see that I truncated the yearID in order to get 187, 188, 189, etc. instead of 1897, 1885 and so on. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it gives me Adrian Beltre with 48 HR in 2004, but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?
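    A sketch of the usual fix (MySQL syntax, untested against this data): compute the decade and its maximum inside the derived table, then join back on the decade rather than on the year:

        select truncate(f.yearID/10, 0) as decade, f.yearID, f.nameFirst, f.nameLast, f.HR
        from (
            select truncate(yearID/10, 0) as decade, max(HR) as HOMERS
            from masterplusbatting
            group by truncate(yearID/10, 0)
        ) as x
        inner join masterplusbatting as f
            on truncate(f.yearID/10, 0) = x.decade
           and f.HR = x.HOMERS
        order by decade;

    As with the per-year query, this keeps ties within a decade; the outer GROUP BY decade in the original query was silently collapsing each decade to an arbitrary row instead of filtering.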

    Read the article

  • SQL query for getting count on same table using left outer join

    - by Sasi
    Hi all, I have a table from which I need to get a count grouped on two columns. The table has two columns: one datetime column and one success value (-1, 1, 0). What I am looking for is something like this, a count of each success value for each month:

        month    success    count
        -----    -------    -----
        11       -1         50
        11        1         50
        11        0         50
        12       -1         50
        12        1         50
        12        0         50

    If there is no success value for a month, then the count should be null or zero. I have tried with a left outer join as well, but of no use; it gives the count incorrectly. Thanks in advance, Sasi
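    A sketch of one approach (the table and column names are assumptions, since the question doesn't give them): build the full month-by-success grid first, then LEFT JOIN the data onto it so empty combinations still appear with a zero count:

        -- hypothetical source table: results(created_at datetime, success int)
        SELECT m.month_no,
               s.success,
               COUNT(r.success) AS cnt             -- counts only matching rows; 0 when none
        FROM (SELECT DISTINCT MONTH(created_at) AS month_no FROM results) m
        CROSS JOIN (SELECT -1 AS success UNION ALL SELECT 0 UNION ALL SELECT 1) s
        LEFT JOIN results r
               ON MONTH(r.created_at) = m.month_no
              AND r.success = s.success
        GROUP BY m.month_no, s.success
        ORDER BY m.month_no, s.success;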

    Read the article

  • Guid Primary/Foreign Key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guid. I'll put my problem straight up. It's a typical retail management app, with POS and back office functionality. It has about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from 'int' to 'Guid', would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the 'NEWSEQUENTIALID()' function. Can someone tell me, from their experience, if Guids will be problematic in tables with many foreign keys? I'll be much appreciative of your thoughts on it...
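    For reference, a sketch of a GUID key that avoids random page splits by using NEWSEQUENTIALID() as the column default (it can only be used in a DEFAULT constraint; the table and column names are illustrative):

        CREATE TABLE dbo.Orders
        (
            OrderID uniqueidentifier NOT NULL
                CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
            OrderDate datetime NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
        );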

    Read the article

  • Most optimal order (of joins) for left join

    - by Ram
    I have 3 tables: Table1 (with 1020690 records), Table2 (with 289425 records) and Table3 (with 83692 records). I have something like this:

        SELECT *
        FROM Table1 T1
        /* OK, fine, SELECT * is bad when not all columns are needed; this is just an example */
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id

    and a query like this:

        SELECT *
        FROM Table1 T1
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        LEFT JOIN Table2 T2 ON T1.id = T2.id

    The query plan shows that both queries use two Merge Joins. For the first query, the first merge is between T1 and T2 and then with T3. For the second query, the first merge is between T1 and T3 and then with T2. Both queries take about the same time (approx. 40 seconds), or sometimes Query1 takes a couple of seconds longer. So my question is: does the join order matter?
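    If it helps to experiment, here is a sketch of pinning the join order to exactly what is written, so the two orderings can be compared without the optimizer reordering them. The question doesn't name the DBMS, but "Merge Join" in the plan suggests SQL Server, where OPTION (FORCE ORDER) is the relevant hint:

        SELECT *
        FROM Table1 T1
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        OPTION (FORCE ORDER);   -- join in the order written instead of letting the optimizer choose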

    Read the article

  • SQL query by passing the values in one table

    - by subash
    Can anyone help me in generating a query for the below scenario? I have two tables, TableA and TableB. TableA has the following columns: EMPLOYEEID, SKILLSETCODE, CERTID, LASTNAME, FIRSTNAME, MIDDLEINITIAL. TableB has two columns: EMPLOYEEID and key_user. I want to:

        SELECT EMPLOYEEID, SKILLSETCODE, CERTID, LASTNAME, FIRSTNAME, MIDDLEINITIAL
        FROM TableA
        WHERE EMPLOYEEID = (select employeeid from TableB where key_user = '249')
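    A sketch of two common forms; the IN variant also works when TableB returns more than one employee for that key_user (the = form would error in that case):

        -- IN handles zero, one or many matching employee ids
        SELECT a.EMPLOYEEID, a.SKILLSETCODE, a.CERTID, a.LASTNAME, a.FIRSTNAME, a.MIDDLEINITIAL
        FROM TableA a
        WHERE a.EMPLOYEEID IN (SELECT b.EMPLOYEEID FROM TableB b WHERE b.key_user = '249');

        -- or written as a join (may repeat rows if TableB holds duplicates)
        SELECT a.EMPLOYEEID, a.SKILLSETCODE, a.CERTID, a.LASTNAME, a.FIRSTNAME, a.MIDDLEINITIAL
        FROM TableA a
        INNER JOIN TableB b ON b.EMPLOYEEID = a.EMPLOYEEID
        WHERE b.key_user = '249';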

    Read the article

  • SQL: GROUP BY after JOIN without overriding rows?

    - by krismeld
    I have a table of basketball leagues, a table of teams and a table of players, like this:

        LEAGUES
        ID | NAME
        ---------
        1  | NBA
        2  | ABA

        TEAMS
        ID | NAME   | LEAGUE_ID
        -----------------------
        20 | BULLS  | 1
        21 | KNICKS | 2

        PLAYERS
        ID | TEAM_ID | FIRST_NAME | LAST_NAME
        -------------------------------------
        1  | 21      | John       | Starks
        2  | 21      | Patrick    | Ewing

    Given a league ID, I would like to retrieve all the players' names and their team ID from all the teams in that league, so I do this:

        SELECT t.id AS team_id, p.id AS player_id, p.first_name, p.last_name
        FROM teams AS t
        JOIN players AS p ON p.team_id = t.id
        WHERE t.league_id = 1

    which returns:

        [0] => stdClass Object
            (
                [team_id] => 21
                [player_id] => 1
                [first_name] => John
                [last_name] => Starks
            )
        [1] => stdClass Object
            (
                [team_id] => 21
                [player_id] => 2
                [first_name] => Patrick
                [last_name] => Ewing
            )
        + around 500 more objects...

    Since I will use this result to populate a dropdown menu for each team containing that team's list of players, I would like to group my result by team ID, so the loop that creates these dropdowns only has to cycle through each team ID instead of all 500+ players each time. But when I use GROUP BY like this:

        SELECT t.id AS team_id, p.id AS player_id, p.first_name, p.last_name
        FROM teams AS t
        JOIN players AS p ON p.team_id = t.id
        WHERE t.league_id = 1
        GROUP BY t.id

    it only returns one player from each team, overriding all the other players on the same team because of the use of the same column names:

        [0] => stdClass Object
            (
                [team_id] => 21
                [player_id] => 2
                [first_name] => Patrick
                [last_name] => Ewing
            )
        [1] => stdClass Object
            (
                [team_id] => 22
                [player_id] => 31
                [first_name] => Shawn
                [last_name] => Kemp
            )
        etc...

    I would like to return something like this:

        [0] => stdClass Object
            (
                [team_id] => 2
                [player_id1] => 1
                [first_name1] => John
                [last_name1] => Starks
                [player_id2] => 2
                [first_name2] => Patrick
                [last_name2] => Ewing
                + 10 more players from this team...
            )
        + 25 more teams...

    Is it possible somehow?
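    If this is MySQL (the stdClass output suggests PHP, and MySQL is a common pairing, though the question doesn't say), one sketch is to keep one row per team and pack the players into a delimited string with GROUP_CONCAT, then split that string in PHP when building each dropdown:

        SELECT t.id AS team_id,
               GROUP_CONCAT(CONCAT(p.id, ':', p.first_name, ' ', p.last_name)
                            ORDER BY p.last_name SEPARATOR '|') AS players
        FROM teams AS t
        JOIN players AS p ON p.team_id = t.id
        WHERE t.league_id = 1
        GROUP BY t.id

    Note that GROUP_CONCAT is truncated at group_concat_max_len (1024 bytes by default), which may need raising for long rosters.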

    Read the article

  • SQL query for selecting the firsts in a series by column

    - by SP
    I'm having some trouble coming up with a query for what I am trying to do. I've got a table we'll call 'Movements' with the following columns: RecID (key), Element (f-key), Time (datetime), Room (int). The table holds a history of movements for the elements. One record contains the element the record is for, the time of the recorded location, and the room it was in at that time. What I would like are all records that indicate that an element entered a room; that would mean the first (by time) entry for any element in a series of movements for that element in the same room. The input is a room number and a time. I.e., I would like all of the records indicating that any element entered room X after time Y. The closest I came was this:

        SELECT Element, MIN(Time)
        FROM Movements
        WHERE Time > Y AND Room = X
        GROUP BY Element

    This will only give me one room-entry record per element, though (if the element has entered room X twice since time Y, I'll only get the first one back). Any ideas? Let me know if I have not explained this clearly. I'm using MS SQL Server 2005.
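    A sketch of one approach that works on SQL Server 2005 (no LAG available): a movement counts as "entering" a room when the element's immediately preceding movement was in a different room, or there is no preceding movement at all. The @RoomX and @TimeY parameters are placeholders for the inputs.

        SELECT m.RecID, m.Element, m.Time, m.Room
        FROM Movements m
        OUTER APPLY (SELECT TOP 1 prev.Room
                     FROM Movements prev
                     WHERE prev.Element = m.Element
                       AND prev.Time < m.Time
                     ORDER BY prev.Time DESC) p
        WHERE m.Room = @RoomX
          AND m.Time > @TimeY
          AND (p.Room IS NULL OR p.Room <> m.Room);   -- no earlier movement, or it was in another room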

    Read the article

  • exact full text search - sql server 2005

    - by csetzkorn
    Hi, is it possible to do an 'exact' full-text search with CONTAINS? I have removed all noise words etc., but the DBMS still seems to manipulate the 'exact word' (e.g. 'j-blade' is treated as 'blade'). Can I disable this? Thanks, Christian. PS: I would like to avoid LIKE because it is too slow, and by exact I mean that the text contains the exact word.
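    For reference, a sketch of the quoted-phrase form of CONTAINS (table and column names are hypothetical); note that SQL Server's word breaker may still split hyphenated terms like 'j-blade' into separate tokens at indexing time, which is likely what is being observed:

        SELECT *
        FROM dbo.Documents                           -- hypothetical table with a full-text indexed column
        WHERE CONTAINS(DocumentText, '"j-blade"');   -- double quotes request the exact phrase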

    Read the article
